
How to Parse HTML with Python

Hostman Team
Technical writer
Python
11.02.2025
Reading time: 13 min

Parsing is the automatic search for various patterns (based on pre-defined structures) in text data sources to extract specific information.

Although parsing is a broad term, it most commonly refers to the process of collecting and analyzing data from remote web resources.

In Python, you can create programs to parse data from external websites using two key tools:

  • Standard HTTP request package
  • External HTML markup processing libraries

However, data processing capabilities are not limited to just HTML documents.

Thanks to a wide range of external libraries in Python, you can organize parsing for documents of any complexity, whether they are arbitrary text, popular markup languages (e.g., XML), or even rare programming languages.

If there is no suitable parsing library available, you can implement it manually using low-level methods that Python provides by default, such as simple string searching or regular expressions. Although, of course, this requires additional skills.
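
If the task is simple enough, Python's built-in `re` module can do the job on its own. Below is a minimal sketch (using a made-up markup string) that pulls text and an attribute value out of tags; keep in mind that regular expressions are fragile on real-world HTML, where a dedicated parser is the safer choice:

```python
import re

# A toy fragment of HTML markup (hypothetical example)
html = '<h1 class="title">Hello</h1><p>First</p><p>Second</p>'

# Capture the text between every pair of <p> tags (non-greedy match)
paragraphs = re.findall(r'<p>(.*?)</p>', html)
print(paragraphs)  # ['First', 'Second']

# Capture an attribute value: the class of the <h1> tag
match = re.search(r'<h1 class="([^"]+)">', html)
if match:
    print(match.group(1))  # title
```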

This guide will cover how to organize parsers in Python. We will focus on extracting data from HTML pages based on specified tags and attributes.

We run all the examples in this guide using the Python 3.10.12 interpreter on a Hostman cloud server with Ubuntu 22.04 and pip 22.0.2 as the package manager.

Structure of an HTML Document

Any document written in HTML consists of two types of tags:

  1. Opening: Defined within less-than (<) and greater-than (>) symbols, e.g., <div>.

  2. Closing: Defined within less-than (<) and greater-than (>) symbols with a forward slash (/), e.g., </div>.

Each tag can have various attributes, the values of which are written in quotes after the equal sign. Some commonly used attributes include:

  • href: Link to a resource. E.g., href="https://hostman.com".
  • class: The class of an object. E.g., class="surface panel panel_closed".
  • id: Identifier of an object. E.g., id="menu".
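
When a parser processes an opening tag, these attributes arrive as name–value pairs. A short sketch with Python's standard html.parser module (fed a hypothetical link tag) shows how they can be collected into a dictionary:

```python
from html.parser import HTMLParser

class AttributePrinter(HTMLParser):
    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples for the opening tag
        print(tag, dict(attrs))

parser = AttributePrinter()
parser.feed('<a href="https://hostman.com" class="surface" id="menu">Hostman</a>')
# a {'href': 'https://hostman.com', 'class': 'surface', 'id': 'menu'}
```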

Each tag, with or without attributes, is an element (object) of the so-called DOM (Document Object Model) tree, which is built by practically any HTML interpreter (parser).

This builds a hierarchy of elements, in which nested tags are child elements of their parent tags.

For example, in a browser, we access elements and their attributes through JavaScript scripts. In Python, we use separate libraries for this purpose. The difference is that after parsing the HTML document, the browser not only constructs the DOM tree but also displays it on the monitor. Consider this simple HTML page:

<!DOCTYPE html>

<html>
    <head>
        <title>This is the page title</title>
    </head>

    <body>
        <h1>This is a heading</h1>
        <p>This is a simple text.</p>
    </body>
</html>

The markup of this page is built with tags in a hierarchical structure without specifying any attributes:

  • html
    • head
      • title
    • body
      • h1
      • p

Such a document structure is more than enough to extract information: we can parse the data simply by reading the text between opening and closing tags.
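
As a sketch of this idea, Python's standard html.parser module can walk the page shown above and record the text found between each pair of tags:

```python
from html.parser import HTMLParser

document = """
<html>
    <head><title>This is the page title</title></head>
    <body>
        <h1>This is a heading</h1>
        <p>This is a simple text.</p>
    </body>
</html>
"""

class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_tag = None
        self.texts = {}

    def handle_starttag(self, tag, attrs):
        self.current_tag = tag

    def handle_data(self, data):
        # Store non-empty text under the tag that opened most recently
        if self.current_tag and data.strip():
            self.texts[self.current_tag] = data.strip()

parser = TextCollector()
parser.feed(document)
print(parser.texts)
# {'title': 'This is the page title', 'h1': 'This is a heading', 'p': 'This is a simple text.'}
```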

However, real website tags have additional attributes that specify both the specific function of the element and its special styling (described in separate CSS files):

<!DOCTYPE html>
<html>
    <body>
        <h1 class="h1_bright">This is a heading</h1>
        <p>This is simple text.</p>

        <div class="block" href="https://hostman.com/products/cloud-server">
            <div class="block__title">Cloud Services</div>
            <div class="block__information">Cloud Servers</div>
        </div>

        <div class="block" href="https://hostman.com/products/vps-server-hosting">
            <div class="block__title">VPS Hosting</div>
            <div class="block__information">Cloud Infrastructure</div>
        </div>

        <div class="block" href="https://hostman.com/services/app-platform">
            <div class="block__title">App Platform</div>
            <div class="block__information">Apps in the Cloud</div>
        </div>
    </body>
</html>

Thus, in addition to explicitly specified tags, the required information can be refined with specific attributes, extracting only the necessary elements from the DOM tree.
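
To illustrate, here is a sketch using Python's standard html.parser that keeps only the text of elements whose class attribute equals block__title, applied to a fragment of the markup above:

```python
from html.parser import HTMLParser

class ClassFilter(HTMLParser):
    def __init__(self, wanted_class):
        super().__init__()
        self.wanted_class = wanted_class
        self.inside_target = False
        self.results = []

    def handle_starttag(self, tag, attrs):
        # Match only tags whose class attribute equals the wanted value
        self.inside_target = dict(attrs).get('class') == self.wanted_class

    def handle_data(self, data):
        if self.inside_target and data.strip():
            self.results.append(data.strip())

    def handle_endtag(self, tag):
        self.inside_target = False

html_doc = """
<div class="block" href="https://hostman.com/products/cloud-server">
    <div class="block__title">Cloud Services</div>
    <div class="block__information">Cloud Servers</div>
</div>
"""

parser = ClassFilter('block__title')
parser.feed(html_doc)
print(parser.results)  # ['Cloud Services']
```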

HTML Data Parser Structure

Web pages can be of two types:

  • Static: During the loading and viewing of the site, the HTML markup remains unchanged. Parsing does not require emulating the browser's behavior.

  • Dynamic: During the loading and viewing of the site (Single-page application, SPA), the HTML markup is modified using JavaScript. Parsing requires emulating the browser's behavior.

Parsing static websites is relatively simple—after making a remote request, the necessary data is extracted from the received HTML document.

Parsing dynamic websites requires a more complex approach. After making a remote request, both the HTML document itself and the JavaScript scripts controlling it are downloaded to the local machine. These scripts, in turn, usually perform several remote requests automatically, loading additional content and modifying the HTML document while viewing the page.

Because of this, parsing dynamic websites requires emulating the browser’s behavior and user actions on the local machine. Without this, the necessary data simply won’t load.

Most modern websites load additional content using JavaScript scripts in one way or another.

The variety of technical implementations of modern websites is so large that they can’t be classified as entirely static or entirely dynamic.

Typically, general information is loaded initially, while specific information is loaded later.

Most HTML parsers are designed for static pages. Systems that emulate browser behavior to generate dynamic content are much less common.

In Python, libraries (packages) intended for analyzing HTML markup can be divided into two groups:

  1. Low-level processors: Compact packages with complex implementations that parse HTML (or XML) syntax and build a hierarchical tree of elements.

  2. High-level libraries and frameworks: Large, but syntactically concise packages with a wide range of features to extract formalized data from raw HTML documents. This group includes not only compact HTML parsers but also full-fledged systems for data scraping. Often, these packages use low-level parsers (processors) from the first group as their core for parsing.

Several low-level libraries are available for Python:

  • lxml: A low-level XML syntax processor that is also used for HTML parsing. It is based on the popular libxml2 library written in C.

  • html5lib: A Python library for HTML syntax parsing, written according to the HTML specification by WHATWG (The Web Hypertext Application Technology Working Group), which is followed by all modern browsers.

However, using high-level libraries is faster and easier—they have simpler syntax and a wider range of functions:

  • BeautifulSoup: A simple yet flexible library for Python that allows parsing HTML and XML documents by creating a full DOM tree of elements and extracting the necessary data.
  • Scrapy: A full-fledged framework for parsing data from HTML pages, consisting of autonomous “spiders” (web crawlers) with pre-defined instructions.
  • Selectolax: A fast HTML page parser that uses CSS selectors to extract information from tags.
  • Parsel: A Python library with a specific selector syntax that allows you to extract data from HTML, JSON, and XML documents.
  • requests-html: A Python library that combines HTTP requests with HTML parsing and offers CSS selectors similar to those available in browser JavaScript.

This guide will review several of these high-level libraries.

Installing the pip Package Manager

We can install all parsing libraries (as well as many other packages) in Python through the standard package manager, pip, which needs to be installed separately.

First, update the list of available repositories:

sudo apt update

Then, install pip using the APT package manager:

sudo apt install python3-pip -y

The -y flag will automatically confirm all terminal prompts during the installation.

To verify that pip was installed correctly, check its version:

pip3 --version

The terminal will display the pip version and the installation path:

pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)

As shown, this guide uses pip version 22.0.2.

Installing the HTTP Requests Package

The Requests package, which allows making requests to remote servers, is often already available in the default Python environment. We will use it in the examples in this guide.

However, in some cases, it might not be installed. Then, you can manually install requests via pip:

pip install requests

If the system already has it, you will see the following message in the terminal:

Requirement already satisfied: requests in /usr/lib/python3/dist-packages (2.25.1)

Otherwise, the command will add requests to the list of available packages for import in Python scripts.

Using BeautifulSoup

To install BeautifulSoup version 4, use pip:

pip install beautifulsoup4

After this, the library will be available for import in Python scripts. To use one of the previously mentioned low-level HTML processors as its parsing backend, install it as well.

First, install lxml:

pip install lxml

Then install html5lib:

pip install html5lib

In the future, you can specify one of these processors as the core parser for BeautifulSoup in your Python code.

Create a new file in your home directory:

nano bs.py

Add the following code:

import requests
from bs4 import BeautifulSoup

# Request to the website 'https://hostman.com'
response = requests.get('https://hostman.com')

# Parse the HTML content of the page using 'html5lib' parser
page = BeautifulSoup(response.text, 'html5lib')

# Extract the title of the page
pageTitle = page.find('title')
print(pageTitle)
print(pageTitle.string)

print("")

# Extract all <a> links on the page
pageParagraphs = page.find_all('a')

# Print the content of the first 3 links (if they exist)
for link in pageParagraphs[:3]:
    print(link.string)

print("")

# Find all div elements with a class starting with 'socials--'
social_links_containers = page.find_all('div', class_=lambda c: c and c.startswith('socials--'))

# Collect the links from these divs
for container in social_links_containers:
    links = container.find_all('a', href=True)
    for link in links:
        href = link['href']

        # Ignore links related to Cloudflare's email protection
        if href.startswith('/cdn-cgi/l/email-protection'):
            continue

        print(href)

Now run the script:

python bs.py

This will produce the following console output:

<title>Hostman - Cloud Service Provider with a Global Cloud Infrastructure</title>
Hostman - Cloud Service Provider with a Global Cloud Infrastructure

Partners
Tutorials
API

https://wa.me/35795959804
https://twitter.com/hostman_com
https://www.facebook.com/profile.php?id=61556075738626
https://github.com/hostman-cloud
https://www.linkedin.com/company/hostman-inc/about/
https://www.reddit.com/r/Hostman_com/

Of course, instead of html5lib, you can specify lxml:

page = BeautifulSoup(response.text, 'lxml')

However, it is best to use the html5lib library as the processor. Unlike lxml, which is specifically designed for working with XML markup, html5lib has full support for modern HTML5 standards.

Although BeautifulSoup has a concise syntax, it does not support browser emulation, meaning it cannot load content dynamically.

Using Scrapy

The Scrapy framework is implemented in a more object-oriented manner. In Scrapy, website parsing is based on three core entities:

  • Spiders: Classes that contain information about parsing details for specified websites, including URLs, element selectors (CSS or XPath), and page browsing mechanisms.

  • Items: Variables for storing extracted data, which are more complex forms of Python dictionaries with a special internal structure.

  • Pipelines: Intermediate handlers for extracted data that can modify items and interact with external software (such as databases).

You can install Scrapy through the pip package manager:

pip install scrapy

After that, you need to initialize a parser project, which creates a separate directory with its own folder structure and configuration files:

scrapy startproject parser

Now, you can navigate to the newly created directory:

cd parser

Check the contents of the current directory:

ls

It has a general configuration file and a directory with project source files:

parser scrapy.cfg

Move to the source files directory:

cd parser

If you check its contents:

ls

You will see both special Python scripts, each performing its function, and a separate directory for spiders:

__init__.py items.py middlewares.py pipelines.py settings.py spiders

Let's open the settings file:

nano settings.py

By default, most parameters are commented out with the hash symbol (#). For the parser to work correctly, you need to uncomment some of these parameters without changing the default values specified in the file:

  • USER_AGENT
  • ROBOTSTXT_OBEY
  • CONCURRENT_REQUESTS
  • DOWNLOAD_DELAY
  • COOKIES_ENABLED

Each specific project will require a more precise configuration of the framework. You can find all available parameters in the official documentation.
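
For reference, after uncommenting, the relevant fragment of settings.py might look like the sketch below. The values shown come from the default project template and can differ between Scrapy versions, so keep whatever defaults your generated file contains:

```python
# settings.py (fragment) — template defaults, uncommented;
# exact values may vary between Scrapy versions
USER_AGENT = "parser (+http://www.yourdomain.com)"
ROBOTSTXT_OBEY = True
CONCURRENT_REQUESTS = 32
DOWNLOAD_DELAY = 3
COOKIES_ENABLED = False
```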

After that, you can generate a new spider:

scrapy genspider hostmanspider hostman.com

After running the above command, the console should display a message about the creation of a new spider:

Created spider 'hostmanspider' using template 'basic' in module:
parser.spiders.hostmanspider

Now, if you check the contents of the spiders directory:

ls spiders

You will see the empty source files for the new spider:

__init__.py  __pycache__  hostmanspider.py

Let's open the script file:

nano spiders/hostmanspider.py

And fill it with the following code:

import scrapy  # Package from the Scrapy framework

class HostmanSpider(scrapy.Spider):  # Spider class inherits from scrapy.Spider
    name = 'hostmanspider'  # Name of the spider

    def start_requests(self):
        urls = ["https://hostman.com"]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        dataTitle = response.css("title::text").get()  # Extract the page title using a CSS selector
        dataA = response.css("a").getall()  # Extract all <a> elements using a CSS selector

        # Overwrite the 'output' file with the title and the first 3 links
        with open("output", "w") as someFile:
            someFile.write(dataTitle + "\n\n")
            for link in dataA[:3]:
                someFile.write(link + "\n")

You can now run the created spider with the following command:

scrapy crawl hostmanspider

Running the spider will create an output file in the current directory. To view the contents of this file, you can use:

cat output

The content of this file will look something like this:

Hostman - Cloud Service Provider with a Global Cloud Infrastructure

<a href="/partners/" itemprop="url" class="body4 medium nd-link-primary"><span itemprop="name">Partners</span></a>
<a href="/tutorials/" itemprop="url" class="body4 medium nd-link-primary"><span itemprop="name">Tutorials</span></a>
<a href="/api-docs/" itemprop="url" class="body4 medium nd-link-primary"><span itemprop="name">API</span></a>

You can find more detailed information on extracting data using selectors (both CSS and XPath) in the official Scrapy documentation.

Conclusion

Data parsing from remote sources in Python is made possible by two main components:

  1. A package for making remote requests
  2. Libraries for parsing data

These libraries can range from simple ones, suitable only for parsing static websites, to more complex ones that can emulate browser behavior and, consequently, parse dynamic websites.

In Python, the most popular libraries for parsing static data are:

  • BeautifulSoup
  • Scrapy

These tools, much like JavaScript DOM functions (e.g., querySelectorAll() with CSS selectors), allow us to extract data (attributes and text) from the elements of any HTML document's DOM tree.


Similar

Python

How to Get the Length of a List in Python

Lists in Python are used almost everywhere. In this tutorial we will look at four ways to find the length of a Python list: by using built‑in functions, recursion, and a loop. Knowing the length of a list is most often required to iterate through it and perform various operations on it. len() function len() is a built‑in Python function for finding the length of a list. It takes one argument—the list itself—and returns an integer equal to the list’s length. The same function also works with other iterable objects, such as strings. Country_list = ["The United States of America", "Cyprus", "Netherlands", "Germany"] count = len(Country_list) print("There are", count, "countries") Output: There are 4 countries Finding the Length of a List with a Loop You can determine a list’s length in Python with a for loop. The idea is to traverse the entire list while incrementing a counter by  1 on each iteration. Let’s wrap this in a separate function: def list_length(list): counter = 0 for i in list: counter = counter + 1 return counter Country_list = ["The United States of America", "Cyprus", "Netherlands", "Germany", "Japan"] count = list_length(Country_list) print("There are", count, "countries") Output: There are 5 countries Finding the Length of a List with Recursion The same task can be solved with recursion: def list_length_recursive(list): if not list: return 0 return 1 + list_length_recursive(list[1:]) Country_list = ["The United States of America", "Cyprus", "Netherlands","Germany", "Japan", "Poland"] count = list_length_recursive(Country_list) print("There are", count, "countries") Output: There are 6 countries How it works. The function list_length_recursive() receives a list as input. If the list is empty, it returns 0—the length of an empty list. Otherwise it calls itself recursively with the argument list[1:], a slice of the original list starting from index 1 (i.e., the list without the element at index 0). The result of that call is added to 1. 
With each recursive step the returned value grows by one while the list shrinks by one element. length_hint() function The length_hint() function lives in the operator module. That module contains functions analogous to Python’s internal operators: addition, subtraction, comparison, and so on. length_hint() returns the length of iterable objects such as strings, tuples, dictionaries, and lists. It works similarly to len(): from operator import length_hint Country_list = ["The United States of America", "Cyprus", "Netherlands","Germany", "Japan", "Poland", "Sweden"] count = length_hint(Country_list) print("There are", count, "countries") Output: There are 7 countries Note that length_hint() must be imported before use. Conclusion In this guide we covered four ways to determine the length of a list in Python. Under equal conditions the most efficient method is len(). The other approaches are justified mainly when you are implementing custom classes similar to list.
17 July 2025 · 3 min to read
Python

Understanding the main() Function in Python

In any complex program, it’s crucial to organize the code properly: define a starting point and separate its logical components. In Python, modules can be executed on their own or imported into other modules, so a well‑designed program must detect the execution context and adjust its behavior accordingly.  Separating run‑time code from import‑time code prevents premature execution, and having a single entry point makes it easier to configure launch parameters, pass command‑line arguments, and set up tests. When all important logic is gathered in one place, adding automated tests and rolling out new features becomes much more convenient.  For exactly these reasons it is common in Python to create a dedicated function that is called only when the script is run directly. Thanks to it, the code stays clean, modular, and controllable. That function, usually named main(), is the focus of this article. All examples were executed with Python 3.10.12 on a Hostman cloud server running Ubuntu 22.04. Each script was placed in a separate .py file (e.g., script.py) and started with: python script.py The scripts are written so they can be run just as easily in any online Python compiler for quick demonstrations. What Is the main() Function in Python The simplest Python code might look like: print("Hello, world!")  # direct execution Or a script might execute statements in sequence at file level: print("Hello, world!")       # action #1 print("How are you, world?") # action #2 print("Good‑bye, world...")  # action #3 That trivial arrangement works only for the simplest scripts. 
As a program grows, the logic quickly becomes tangled and demands re‑organization: # function containing the program’s main logic (entry point) def main():     print("Hello, world!") # launch the main logic if __name__ == "__main__":     main()                    # call the function with the main logic With more actions the code might look like: def main(): print("Hello, world!") print("How are you, world?") print("Good‑bye, world...") if __name__ == "__main__": main() This implementation has several important aspects, discussed below. The main() Function The core program logic lives inside a separate function. Although the name can be anything, developers usually choose main, mirroring C, C++, Java, and other languages.  Both helper code and the main logic are encapsulated: nothing sits “naked” at file scope. # greeting helper def greet(name): print(f"Hello, {name}!") # program logic def main(): name = input("Enter your name: ") greet(name) # launch the program if __name__ == "__main__": main() Thus main() acts as the entry point just as in many other languages. The if __name__ == "__main__" Check Before calling main() comes the somewhat odd construct if __name__ == "__main__":.  Its purpose is to split running from importing logic: If the script runs directly, the code inside the if block executes. If the script is imported, the block is skipped. Inside that block, you can put any code—not only the main() call: if __name__ == "__main__":     print("Any code can live here, not only main()") __name__ is one of Python’s built‑in “dunder” (double‑underscore) variables, often called magic or special. All dunder objects are defined and used internally by Python, but regular users can read them too. Depending on the context, __name__ holds: "__main__" when the module runs as a standalone script. The module’s own name when it is imported elsewhere. This lets a module discover its execution context. 
Advantages of Using  main() Organization Helper functions and classes, as well as the main function, are wrapped separately, making them easy to find and read. Global code is minimal—only initialization stays at file scope: def process_data(data): return [d * 2 for d in data] def main(): raw = [1, 2, 3, 4] result = process_data(raw) print("Result:", result) if __name__ == "__main__": main() A consistent style means no data manipulation happens at the file level. Even in a large script you can quickly locate the start of execution and any auxiliary sections. Isolation When code is written directly at the module level, every temporary variable, file handle, or connection lives in the global namespace, which can be painful for debugging and testing. Importing such a module pollutes the importer’s globals: # executes immediately on import values = [2, 4, 6] doubles = [] for v in values: doubles.append(v * 2) print("Doubled values:", doubles) With main() everything is local; when the function returns, its variables vanish: def double_list(items): return [x * 2 for x in items] # create a new list with doubled elements def main(): values = [2, 4, 6] result = double_list(values) print("Doubled values:", result) if __name__ == "__main__": main() That’s invaluable for unit testing, where you might run specific functions (including  main()) without triggering the whole program. Safety Without the __name__ check, top‑level code runs even on import—usually undesirable and potentially harmful. some.py: print("This code will execute even on import!") def useful_function(): return 42 main.py: import some print("The logic of the imported module executed itself...") Console: This code will execute even on import! The logic of the imported module executed itself... The safer some.py: def useful_function():     return 42 def main():     print("This code will not run on import") main() plus the __name__ check guard against accidental execution. 
Inside main() you can also verify user permissions or environment variables. How to Write main() in Python Remember: main() is not a language construct, just a regular function promoted to “entry point.” To ensure it runs only when the script starts directly: Tools – define helper functions with business logic. Logic – assemble them inside main() in the desired order. Check – add the if __name__ == "__main__" guard.  This template yields structured, import‑safe, test‑friendly code—excellent practice for any sizable Python project. Example Python Program Using main() # import the standard counter from collections import Counter # runs no matter how the program starts print("The text‑analysis program is active") # text‑analysis helper def analyze_text(text): words = text.split() # split text into words total = len(words) # total word count unique = len(set(words)) # unique word count avg_len = sum(len(w) for w in words) / total if total else 0 freq = Counter(words) # build frequency counter top3 = freq.most_common(3) # top three words return { 'total': total, 'unique': unique, 'avg_len': avg_len, 'top3': top3 } # program’s main logic def main(): print("Enter text (multiple lines). Press Enter on an empty line to finish:") lines = [] while True: line = input() if not line: break lines.append(line) text = ' '.join(lines) stats = analyze_text(text) print(f"\nTotal number of words: {stats['total']}") print(f"Unique words: {stats['unique']}") print(f"Average word length: {stats['avg_len']:.2f}") print("Top‑3 most frequent words:") for word, count in stats['top3']: print(f" {word!r}: {count} time(s)") # launch program if __name__ == "__main__": main() Running the script prints a prompt: Enter text (multiple lines). Press Enter on an empty line to finish: Input first line: Star cruiser Orion glided silently through the darkness of intergalactic space. 
Second line: Signals of unknown life‑forms flashed on the onboard sensors where the nebula glowed with a phosphorescent light. Third line: The cruiser checked the sensors, then the cruiser activated the defense system, and the cruiser returned to its course. Console output: The text‑analysis program is active Total number of words: 47 Unique words: 37 Average word length: 5.68 Top‑3 most frequent words: 'the': 7 time(s) 'cruiser': 4 time(s) 'of': 2 time(s) If you import this program (file program.py) elsewhere: import program         # importing program.py Only the code outside main() runs: The text‑analysis program is active So, a moderately complex text‑analysis utility achieves clear logic separation and context detection. When to Use main() and When Not To Use  main() (almost always appropriate) when: Medium/large scripts – significant code with non‑trivial logic, multiple functions/classes. Libraries or CLI utilities – you want parts of the module importable without side effects. Autotests – you need to test pure logic without extra boilerplate. You can skip main() when: Tiny one‑off scripts – trivial logic for a quick data tweak. Educational snippets – short examples illustrating a few syntax features. In short, if your Python program is a standalone utility or app with multiple processing stages, command‑line arguments, and external resources—introduce  main(). If it’s a small throw‑away script, omitting main() keeps things concise. Conclusion The  main() function in Python serves two critical purposes: Isolates the program’s core logic from the global namespace. Separates standalone‑execution logic from import logic. Thus, a Python file evolves from a straightforward script of sequential actions into a fully‑fledged program with an entry point, encapsulated logic, and the ability to detect its runtime environment.
14 July 2025 · 8 min to read
Python

Python Static Method

A static method in Python is bound to the class itself rather than any instance of that class. So, you can call it without first creating an object and without access to instance data (self).  To create a static method we need to use a decorator, specifically @staticmethod. It will tell Python to call the method on the class rather than an instance. Static methods are excellent for utility or helper functions that are logically connected to the class but don't need to access any of its properties.  When To Use & Not to Use a Python Static Method Static methods are frequently used in real-world code for tasks like input validation, data formatting, and calculations—especially when that logic naturally belongs with a class but doesn't need its state. Here's an example from a User class that checks email format: class User: @staticmethod def is_valid_email(email): return "@" in email and "." in email This method doesn't depend on any part of the User instance, but conceptually belongs in the class. It can be used anywhere as User.is_valid_email(email), keeping your code cleaner and more organized. If the logic requires access to or modification of instance attributes or class-level data, avoid using a static method as it won't help here. For instance, if you are working with user settings or need to monitor object creation, you will require a class method or an instance method instead. class Counter: count = 0 @classmethod def increment(cls): cls.count += 1 In this example, using a static method would prevent access to cls.count, making it useless for this kind of task. Python Static Method vs Class Method Though they look similar, class and static methods in Python have different uses; so, let's now quickly review their differences. Defined inside a class, a class method is connected to that class rather than an instance. Conventionally called cls, the class itself is the first parameter; so, it can access and change class-level data. 
Factory patterns, alternate constructors, or any activity applicable to the class as a whole and not individual instances are often implemented via class methods. Conversely, a static method is defined within a class but does not start with either self or cls parameters. It is just a regular function that “lives” inside a class but doesn’t interact with the class or its instances. For utility tasks that are conceptually related to the class but don’t depend on its state, static methods are perfect. Here's a quick breakdown of the Python class/static methods differences: Feature Class Method Static Method Binding Bound to the class Not bound to class or instance First parameter cls (class itself) None (no self or cls) Access to class/instance data Yes No Common use cases Factory methods, class-level behavior Utility/helper functions Decorator @classmethod @staticmethod Python Static Method vs Regular Functions You might ask: why not just define a function outside the class instead of using a static method? The answer is structure. A static method keeps related logic grouped within the class, even if it doesn't interact with the class or its instances. # Regular function def is_even(n): return n % 2 == 0 # Static method inside a class class NumberUtils: @staticmethod def is_even(n): return n % 2 == 0 Both functions do the same thing, but placing is_even inside NumberUtils helps keep utility logic organized and easier to find later. Let’s proceed to the hands-on Python static method examples. Example #1 Imagine that we have a MathUtils class that contains a static method for calculating the factorial: class MathUtils: @staticmethod def factorial(n): if n == 0: return 1 else: return n * MathUtils.factorial(n-1) Next, let's enter: print(MathUtils.factorial(5))120 We get the factorial of 5, which is 120. Here, the factorial static method does not use any attributes of the class instance, only the input argument n. 
And we called it using the MathUtils.factorial(n) syntax, without creating an instance of the MathUtils class.

Utility functions like this could also live at module or package level, but the @staticmethod decorator lets you keep such a function inside a class when it logically belongs there. The function exists on its own: it is related to the class logically but is independent of its internal state.

Example #2

Let's say we're working with a StringUtils module that has a function for checking whether a string is a palindrome:

```python
def is_palindrome(string):
    return string == string[::-1]
```

This function doesn't rely on any instance-specific data; it simply performs a check on the input. That makes it a good candidate for a static method. To organize it within a class and signal that it doesn't depend on class state, we can use the @staticmethod decorator like this:

```python
class StringUtils:
    @staticmethod
    def is_palindrome(string):
        return string == string[::-1]
```

Let's enter for verification:

```python
print(StringUtils.is_palindrome("deed"))  # True
print(StringUtils.is_palindrome("deer"))  # False
```

That's correct: the first word is a palindrome, so the interpreter outputs True, while the second is not, and we get False. We can call the is_palindrome method through the StringUtils class using the StringUtils.is_palindrome(string) syntax instead of importing the is_palindrome function and calling it directly.

Static methods also differ from instance methods in that they cannot affect the state of an instance. Since they do not have access to the instance, they cannot alter its attribute values. Modifying instance state is the job of instance methods.

Example #3

Let's look at another example.
Suppose we have a Person class with an age attribute and a static is_adult method that checks a value against the age of majority:

```python
class Person:
    def __init__(self, age):
        self.age = age

    @staticmethod
    def is_adult(age):
        return age >= 21
```

Next, let's create an age variable with a value of 24, call the is_adult static method of the Person class with this value, and store the result in the is_adult variable:

```python
age = 24
is_adult = Person.is_adult(age)
```

Now, to test this, let's enter:

```python
print(is_adult)  # True
```

Since the age satisfies the condition specified in the static method, we get True. Here, the is_adult static method serves as an auxiliary tool, a helper function: it accepts the age argument but has no access to the age attribute of a Person instance.

Conclusion

Static methods improve code readability and make logic easier to reuse. They are also convenient compared with standalone Python functions: because they travel with the class, they do not require a separate import. Applying static methods can help you streamline and organize your code, and, as the examples above show, they are easy to master.

On our app platform you can find Python applications, such as Celery, Django, FastAPI and Flask.
16 April 2025 · 6 min to read
