
Python Developer - Advanced Job Interview Questions and Tasks

In this article, we will look at the questions you can expect during a job interview for positions that already require some Python experience.

Michał Kandybowicz

Backend Team Lead - Rebell Pay


June 24, 2024

Intro

Python is a very popular programming language, known for its simplicity, readability and wide applicability. Whether you are a novice programmer just getting started with coding or an experienced developer looking for answers to specific technical questions, there is something for you in this article.

Below, I answer nine technical questions that come up in recruitment interviews for the position of Python Developer. Recruiters love to ask tricky questions, so familiarising yourself with these will help you prepare. What's more, every current and future Python developer should know the answers to them.

What are the differences between list comprehensions and generator expressions?

List comprehensions and generator expressions are two constructs in Python for creating data sequences. Although they have very similar syntax, they differ in memory usage, execution speed, and behavior.

Object Creation

List Comprehensions: Create a complete list in memory. This means all elements are generated at once and stored in memory.

Generator Expressions: Create a generator that produces elements on the fly, one by one, as needed. It does not create a complete list in memory.

Memory Management

List Comprehensions: May be less efficient if the list is large because the entire list is stored in memory.

Generator Expressions: More memory efficient because elements are generated 'lazily' (lazy evaluation) and not stored all at once in memory.

Execution Time

List Comprehensions: Tend to be faster if you need to process all elements because the entire list is available immediately.

Generator Expressions: May be slower in processing all elements because elements are generated on-demand, adding overhead for each 'next' operation.

When to Use

List Comprehensions: Ideal when you need access to all elements at once or when the resulting list will be used multiple times.

Generator Expressions: Better when working with large datasets or data streams that cannot be loaded entirely into memory or when you want to process elements only once.

```python
# List comprehension
list_comp = [x * 2 for x in range(10)]
print(list_comp)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

# Iterating over the list comprehension
for x in list_comp:
    print(x)  # 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 (one by one)

# Generator expression
gen_exp = (x * 2 for x in range(10))
print(gen_exp)  # <generator object <genexpr> at 0x...>

# Iterating over the generator
for x in gen_exp:
    print(x)  # 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 (one by one)
```
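To make the memory difference concrete, here is a quick sketch comparing the two with `sys.getsizeof`. Note that `getsizeof` reports the size of the container object itself (including its internal pointer array for a list), not the sizes of the elements; the generator stays tiny no matter how large the range is.

```python
import sys

n = 1_000_000

# The list materialises all n elements at once
list_comp = [x * 2 for x in range(n)]
# The generator only stores its internal state
gen_exp = (x * 2 for x in range(n))

list_size = sys.getsizeof(list_comp)
gen_size = sys.getsizeof(gen_exp)
print(f"List: {list_size} bytes, generator: {gen_size} bytes")
```

The exact numbers vary between Python versions, but the list will be several megabytes while the generator stays in the low hundreds of bytes.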

How does memory management work in Python? How does garbage collection (GC) work and when can it be manually invoked?

Memory management in Python is responsible for allocating and freeing memory in applications written in this language. Here's how it works in simple terms.

Memory Allocation

When you create a new object (e.g., number, list, dictionary), Python automatically allocates memory for it. You don't have to worry about managing this memory manually.

Memory Deallocation

When an object is no longer needed (i.e., no variable refers to it), the memory it occupied should be freed so it can be used by other objects.

Garbage Collection

Garbage collection is a mechanism that automatically frees memory occupied by objects that are no longer in use. In Python, it works in two main ways.

Reference Counting

Every object has a counter that indicates how many variables (references) refer to it. When the counter drops to zero, the object is deleted, and the memory is freed.
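Reference counting can be observed directly with `sys.getrefcount`, which is a CPython implementation detail; a minimal sketch (the call itself temporarily adds one reference to its argument):

```python
import sys

a = [1, 2, 3]
refs_before = sys.getrefcount(a)  # the call adds one temporary reference

b = a  # a second name now refers to the same list
refs_after = sys.getrefcount(a)
print(refs_before, refs_after)  # refs_after is exactly one higher

del b  # dropping the name lowers the count again
print(sys.getrefcount(a))
```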

Cyclic Dependencies

Python has a mechanism for detecting and removing cyclic dependencies (e.g., two objects referring to each other but not used in the code). It uses a generational garbage collector that periodically checks objects for cycles.

Manual Garbage Collection Invocation

Although Python manages memory automatically, garbage collection can also be triggered manually using the gc module.

```python
import gc

# Function that creates a cycle of references
def create_cycle():
    list_a = []
    list_b = [list_a]
    list_a.append(list_b)
    # After the function returns, list_a and list_b still exist
    # in memory as a cycle of references

create_cycle()

# Before running GC
print(f"Objects in memory: {len(gc.get_objects())}")

# Running GC manually
gc.collect()

# After running GC
print(f"Objects in memory: {len(gc.get_objects())}")
```

How are decorators implemented in Python, and in what cases are they particularly useful?

Decorators in Python are special functions that modify the behavior of other functions or methods. They allow 'wrapping' one function with another, providing additional functionality before or after executing the original function without changing its source code.

Decorators are particularly useful in the following cases.

Logging and Monitoring

Enable tracking function calls, which is useful for debugging and monitoring application performance.

```python
def simple_decorator(func):
    def wrapper():
        print("Before running func")
        func()
        print("After running func")
    return wrapper

@simple_decorator
def say_hello():
    print("Hello!")

say_hello()
```

Authorization and Authentication

Can check if the user has the appropriate permissions before executing a specific function.

```python
def requires_auth(func):
    def wrapper(user, *args, **kwargs):
        if not user.get('is_authenticated'):
            raise PermissionError("User is not authenticated")
        return func(user, *args, **kwargs)
    return wrapper

@requires_auth
def access_secure_data(user):
    return "Secure Data"

user = {'is_authenticated': True}
print(access_secure_data(user))

user = {'is_authenticated': False}
try:
    print(access_secure_data(user))
except PermissionError as e:
    print(e)
```

Input/Output Modification

Allow modifying arguments passed to a function and results returned by the function.

```python
def uppercase_output(func):
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        return result.upper()
    return wrapper

@uppercase_output
def greet(name):
    return f"Hello, {name}"

print(greet("world"))  # HELLO, WORLD
```

Caching Results

Speed up function execution by remembering results for already computed arguments.

```python
def cache(func):
    storage = {}
    def wrapper(*args):
        if args in storage:
            return storage[args]
        result = func(*args)
        storage[args] = result
        return result
    return wrapper

@cache
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # 55
```
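Worth mentioning in an interview: the standard library already ships this pattern as `functools.lru_cache`, so a hand-rolled cache decorator is rarely needed in practice. A minimal equivalent of the example above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, like the manual dict version
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(30))        # 832040
print(fibonacci.cache_info())  # hits/misses statistics for free
```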

Access Control

Can restrict access to functions, e.g., based on time (call limits).

```python
import time

def rate_limit(func):
    last_called = [0]  # a list stores mutable state in the closure
    def wrapper(*args, **kwargs):
        now = time.time()
        if now - last_called[0] < 1:  # limits calls to 1 per second
            raise RuntimeError("Function called too frequently")
        last_called[0] = now
        return func(*args, **kwargs)
    return wrapper

@rate_limit
def do_something():
    print("Function executed")

try:
    do_something()
    time.sleep(0.5)
    do_something()
except RuntimeError as e:
    print(e)

time.sleep(1)
do_something()
```

What techniques do we use for profiling Python code to identify bottlenecks?

Profiling Python code is key to identifying performance bottlenecks and optimizing applications. There are many solutions available, the simplest being the 'stopwatch' with timeit, which is perfect for microbenchmarks, i.e., measuring the execution time of small code fragments.

```python
import timeit

def test_func():
    total = 0
    for i in range(10000):
        total += i
    return total

elapsed = timeit.timeit('test_func()', globals=globals(), number=1000)
print(f"Execution time: {elapsed:.5f} seconds")
```

cProfile is a built-in tool in Python for profiling. It can be used to measure function execution time and identify the most expensive parts of the code.

```python
import cProfile
import pstats

def profiling_func():
    # Example function to profile
    total = 0
    for i in range(10000):
        total += i
    return total

cProfile.run('profiling_func()', 'profile_output')

# Displaying the results in readable form
p = pstats.Stats('profile_output')
p.sort_stats('cumulative').print_stats(10)
```

Additionally, you can use more advanced tools.

line_profiler

This tool profiles code line by line. It is particularly useful for precise analysis of the execution time of individual lines of code. With it, you can identify specific lines of code that are the most expensive in terms of execution time.

memory_profiler

Allows precise monitoring of memory usage during program execution, which is essential for identifying memory management issues and optimizing performance.

Py-Spy

Enables monitoring CPU usage by the application and identifying hotspots, i.e., code fragments that consume the most CPU resources. It is a useful tool for performance diagnostics and identifying bottlenecks in the application.

Describe how __getitem__, __setitem__, and __delitem__ work in the context of creating custom container types.

__getitem__, __setitem__, and __delitem__ are special methods in Python that allow access, setting, and deleting elements from custom container types, such as classes that behave like lists, dictionaries, or other containers.

__getitem__(self, key)

This method is called when trying to access a container element using the indexing operator []. It accepts one argument, which is the key or index used to access the element.

```python
class MyList:
    def __init__(self, data):
        self.data = data

    def __getitem__(self, index):
        return self.data[index]

my_list = MyList([1, 2, 3, 4, 5])
print(my_list[2])  # Displays: 3
```

__setitem__(self, key, value)

This method is called when trying to assign a value to a container element using the indexing operator []. It accepts two arguments: the key or index and the value to be assigned.

```python
class MyList:
    def __init__(self, data):
        self.data = data

    def __setitem__(self, index, value):
        self.data[index] = value

my_list = MyList([1, 2, 3, 4, 5])
my_list[2] = 10
print(my_list.data)  # Displays: [1, 2, 10, 4, 5]
```

__delitem__(self, key)

This method is called when we try to remove an element from the container using the del operator and indexing []. It accepts only one argument: the key or index of the element we want to remove.

```python
class MyList:
    def __init__(self, data):
        self.data = data

    def __delitem__(self, index):
        del self.data[index]

my_list = MyList([1, 2, 3, 4, 5])
del my_list[2]
print(my_list.data)  # Displays: [1, 2, 4, 5]
```

Thanks to these special methods, it is possible to create custom container types in Python that can behave like built-in types such as lists or dictionaries. They allow for customizing the container's behavior according to individual needs, for example through custom operations when getting, setting, or removing elements.
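To round out a custom container, other dunder methods are often added alongside these three; a small sketch showing `__len__` and `__contains__`, and the fact that `__getitem__` alone is enough to make the object iterable (Python falls back to indexing from 0 until `IndexError`):

```python
class MyList:
    def __init__(self, data):
        self.data = data

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):               # enables len(my_list)
        return len(self.data)

    def __contains__(self, item):    # enables `x in my_list`
        return item in self.data

my_list = MyList([1, 2, 3])
print(len(my_list))   # 3
print(2 in my_list)   # True
for x in my_list:     # iteration works via __getitem__
    print(x)
```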

How do regular expressions work in Python? What are the most commonly used functions in the re module?

Regular expressions in Python are tools for manipulating and processing text using patterns. The re module in Python provides functions and objects for handling regular expressions.

How do regular expressions work?

Regular expressions describe patterns of characters that are used to search and manipulate texts. They can contain special characters as well as ordinary characters that define what patterns are matched in the text.

Most commonly used functions in the re module

re.search(pattern, string, flags=0): Searches for the first match of the pattern in the entire text.

```python
import re

result = re.search(r'is', 'This is a test string.')
print(result)  # <re.Match object; span=(2, 4), match='is'>
```

re.match(pattern, string, flags=0): Checks if the pattern matches the beginning of the text.

```python
import re

result = re.match(r'This', 'This is a test string.')
print(result)  # <re.Match object; span=(0, 4), match='This'>
```

re.findall(pattern, string, flags=0): Finds all matches of the pattern in the text and returns a list of results.

```python
import re

result = re.findall(r'\d+', 'There are 10 apples and 20 oranges.')
print(result)  # ['10', '20']
```

re.sub(pattern, repl, string, count=0, flags=0): Replaces all occurrences of the pattern in the text with a given string.

```python
import re

result = re.sub(r'\s+', '_', 'This is a test string.')
print(result)  # This_is_a_test_string.
```

These functions are the basic tools for working with regular expressions in Python. They allow searching, matching, and replacing text according to specified patterns.
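One more habit worth knowing for interviews: when the same pattern is applied many times, it can be precompiled with `re.compile`, which returns a pattern object exposing the same `search`, `match`, `findall`, and `sub` methods.

```python
import re

# Compile once, reuse many times
number = re.compile(r'\d+')

print(number.findall('Order 66 shipped 3 items'))  # ['66', '3']
print(number.search('id-12345').group())           # 12345
```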

Describe the differences between procedural, object-oriented, and functional approaches in Python programming.

Procedural approach

In the procedural approach, the program is structurally organized around procedures (functions) that perform specific tasks on data. The code is sequential and executed from top to bottom. The main features of the procedural approach are:

Procedures: The basic elements are procedures (functions) that perform specific tasks on data.

Global variables: Data is usually stored as global variables accessible to procedures.

Simple data structures: Simple data structures such as lists, tuples, dictionaries, etc., are used.

Simplicity: An approach that is easy to understand and apply in small projects.

```python
def count_sum(a, b):
    return a + b

a = 10
b = 20
total = count_sum(a, b)
print(total)  # 30
```

Object-oriented approach

In the object-oriented approach, the program is organized around objects, which are instances of classes. These objects store data (attributes) and methods (functions) that can operate on these data. The main features of the object-oriented approach are:

Classes and objects: The code is organized around classes that define the structure and behavior of objects.

Encapsulation: Classes can hide their internal implementations, and access to data can be controlled through accessor methods (getter and setter).

Inheritance: Classes can inherit features and methods from other classes.

Polymorphism: Objects of different classes can be treated uniformly, allowing the same operations to be performed on different types of data.

```python
class Calculator:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def count_sum(self):
        return self.a + self.b

calculator = Calculator(10, 20)
total = calculator.count_sum()
print(total)  # 30
```

Functional approach

In the functional approach, the program is organized around functions that are treated as first-class objects. These functions can be passed as arguments to other functions, returned as values from functions, and stored as variables. The main features of the functional approach are:

First-class functions: Functions can be assigned to variables, passed as arguments, and returned as values.

Statelessness: Functions do not have an internal state and operate only on their arguments.

Avoiding side effects: Functions strive to avoid side effects, meaning they should not modify data outside their own scope.

Recursion: Often uses recursion for iteration and data processing.
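For symmetry with the procedural and object-oriented examples above, the same summing task can be sketched in a functional style, treating functions as values and avoiding mutable state:

```python
from functools import reduce

numbers = [10, 20]

# reduce folds the list with an anonymous function; no state is mutated
total = reduce(lambda acc, x: acc + x, numbers, 0)
print(total)  # 30

# First-class functions: behaviour is passed as an argument
def apply_twice(func, value):
    return func(func(value))

print(apply_twice(lambda x: x * 2, 5))  # 20
```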

In Python, you can combine these three approaches in one program depending on the needs and design preferences. However, Python usually encourages the use of the object-oriented approach due to its flexibility and code reusability.

What are the differences between @property and direct access to class attributes? What are the advantages of using @property?

@property is a decorator in Python that allows defining accessor methods for class attributes. This enables controlling access to attributes and performing additional actions when reading or writing attribute values. Here are the differences between using @property and direct access to class attributes:

Direct access to class attributes

```python
class MyClass:
    def __init__(self):
        self._attribute = None

my_object = MyClass()
my_object._attribute = "value"
print(my_object._attribute)
```

Using @property

```python
class MyClass:
    def __init__(self):
        self._attribute = None

    @property
    def attribute(self):
        return self._attribute

my_object = MyClass()
print(my_object.attribute)     # None
my_object.attribute = "value"  # Raises AttributeError: no setter is defined
```

Advantages of using @property

Access control: Allows controlling access to class attributes, enabling defining custom actions when reading and writing attribute values.

Encapsulation: Helps in encapsulating data, meaning that the internal implementation of attributes can be hidden, and safe access can be provided.

Interface customization: Enables defining the class interface in a more understandable and user-friendly way.

Easy to understand and maintain: Helps in creating more understandable and flexible code that is easier to maintain and extend in the future.
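The access-control advantage becomes clearer once a setter is added with the `@<name>.setter` decorator, since writes can then be validated; a small illustrative sketch:

```python
class Temperature:
    def __init__(self):
        self._celsius = 0

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        # Validation hook: reject physically impossible values
        if value < -273.15:
            raise ValueError("Temperature below absolute zero")
        self._celsius = value

t = Temperature()
t.celsius = 25     # goes through the setter and its validation
print(t.celsius)   # 25
```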

How does itertools work and what are the most commonly used functions of this module?

itertools is a module in Python that provides a set of tools for creating efficient iterators. Iterators are objects that allow iterating over sequences of data while occupying minimal memory. itertools offers many useful functions for manipulating iterators and generating iterable data sequences.

Some of the most commonly used functions from the itertools module

itertools.chain(*iterables): The chain function combines multiple iterable objects into one long iterator.

```python
import itertools

list1 = [1, 2, 3]
list2 = ['a', 'b', 'c']
chain = itertools.chain(list1, list2)
for element in chain:
    print(element)
```

itertools.cycle(iterable): The cycle function creates an infinite iterator that repeatedly cycles through elements from the given iterable object.

```python
import itertools

cycle = itertools.cycle([1, 2, 3])
for _ in range(5):
    print(next(cycle))
```

itertools.count(start=0, step=1): The count function creates an infinite iterator that generates numbers starting from the start value with a specified step.

```python
import itertools

counter = itertools.count(start=1, step=2)
for _ in range(5):
    print(next(counter))
```

itertools.product(*iterables, repeat=1): The product function creates an iterator containing Cartesian products of elements from the given iterable objects.

```python
import itertools

products = itertools.product('AB', repeat=2)
for product in products:
    print(product)
```

itertools.permutations(iterable, r=None): The permutations function generates all possible permutations of elements from the given iterable object.

```python
import itertools

perms = itertools.permutations([1, 2, 3], 2)
for perm in perms:
    print(perm)
```

itertools.groupby(iterable, key=None): The groupby function groups elements from a given iterable based on a key, which is a function that specifies the grouping key.

```python
import itertools

data = [('a', 1), ('b', 2), ('a', 3), ('b', 4)]
groups = itertools.groupby(data, key=lambda x: x[0])
for key, group in groups:
    print(key, list(group))
```
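A classic interview gotcha: groupby only merges *consecutive* elements with equal keys, so on unsorted input it produces one group per run rather than one group per key. Sorting by the same key first gives the usually expected result:

```python
import itertools

data = [('a', 1), ('b', 2), ('a', 3), ('b', 4)]

# Unsorted input: each run of equal keys becomes its own group
unsorted_keys = [k for k, _ in itertools.groupby(data, key=lambda x: x[0])]
print(unsorted_keys)  # ['a', 'b', 'a', 'b']

# Sort by the same key first, then group
data.sort(key=lambda x: x[0])
sorted_groups = {k: [v for _, v in g]
                 for k, g in itertools.groupby(data, key=lambda x: x[0])}
print(sorted_groups)  # {'a': [1, 3], 'b': [2, 4]}
```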

Summary

I hope the answers to these technical questions related to Python programming have proven helpful and clarified some of the more complex aspects of the language. Python, thanks to its flexibility, remains one of the most important tools in the arsenal of programmers around the world.

Remember that learning programming is a continuous process, and every new piece of information and every solved challenge brings you closer to becoming a more proficient and efficient Python programmer.

The technical interview for a Python Developer position is a key stage in the recruitment process. To increase your chances of success, it is a good idea to prepare well for the interview, especially if you aspire to be a Junior Python Developer. Sample recruitment questions may cover many more Python-related topics, such as bubble sort in Python, file operations, or the use of anonymous lambda functions.


© 2024 MockIT