Planet Python
Last update: May 18, 2025 04:42 AM UTC
May 17, 2025
The Python Coding Stack
The Chores Rota (#3 in The `itertools` Series • `cycle()` and Combining Tools)
"It's your turn to take the bins out."
"No way, I washed up the dishes today, and vacuumed the rugs yesterday."
"But…"
And on and on it went. Yteria and her flatmate, Silvia, had these arguments every day. Yteria was hoping she'd be able to move soon—move to a new area and to a flat she didn't have to share with Silvia…or anyone else.
"Right. Let me set up a rota and then we'll follow it strictly", and Yteria got straight to work.
It had been several weeks since Yteria had lost the word "for". For this world is a bit different to yours and mine. People can lose words through some mishap or nefarious means. And if you lose a word, you can't speak it, you can't write it, you can't type it. You still know it's there somewhere, that it exists in the language, but you can't use it.
You can follow Yteria's origin story, how she lost the word "for" and the challenges she faced when programming here: The ‘itertools’ Series.
It’s unlikely you care, but I’ll tell you anyway. I launched a new publication last week. But it’s completely unrelated to Python and it’s unlikely there will be much overlap between the two audiences. Still, if you want to follow my ‘back to the future’ journey, here’s the first post that introduces the publication: Back on the Track • The 25-Year Gap • #1
Creating Infinite Iterables from Finite Ones
Yteria set up two lists, one with the chores and another with Silvia's name and her own:

Next, she wanted to write code to convert these lists into infinite sequences by repeating the contents of the lists forever:
But then she stopped.
Yteria had been programming without the ability to use the word "for" for several weeks by now. And she had discovered the itertools
module in Python's standard library. This module came to her rescue on several occasions.
And there it was: itertools.cycle(). It was the perfect tool for what she needed:
The function itertools.cycle() accepts any iterable and returns an iterator that will keep yielding items from the original iterable, restarting from the beginning each time it reaches the end.
If you want to brush up on the difference between iterable and iterator, you can read the following articles:
Iterable: Python's Stepping Stones • (Data Structure Categories #1)
A One-Way Stream of Data • Iterators in Python (Data Structure Categories #6)
But before we move on, let's still write the create_infinite_sequence() function Yteria was about to write. A version of this function could be as follows:
This function includes a yield rather than a return. Therefore, this is a generator function. Calling this function creates a generator. You can read more about generators in this article: Pay As You Go • Generate Data Using Generators (Data Structure Categories #7)
A generator created from this generator function starts with index equal to 0 and, therefore, starts by yielding the first element in the sequence. Next time, it yields the second, and so on. However, the final line in the function definition uses a conditional expression to reset the index to zero whenever it reaches the end of the sequence.
So, for a list with three elements, such as tasks, here are the first few steps:
1. The generator starts with index equal to 0, yields the first element, then increments index to 1. The increment happens in the conditional expression. Note how the third operand in the conditional expression—the one after the else—is index + 1.
2. Since index is now 1, the generator yields the second element and increments index to 2.
3. When the generator yields sequence[2], the conditional expression resets index to 0 since index, which is 2, is equal to len(sequence) - 1.
4. The generator then yields the first element of the sequence and the whole process repeats itself.
Let's confirm that this gives the same output as itertools.cycle():
So, does it matter which option you choose?
Yes, it does.
First of all, once you know about itertools.cycle(), it's much easier and quicker to use it than to write your own function. It also makes your code more readable for anyone who's aware of itertools.cycle()—and even if they're not, the function name gives a good clue to what it does.
A second advantage of using itertools.cycle() is that it works with any iterable. The create_infinite_sequence() generator function only works with sequences. A sequence is an ordered collection in which you can use integers as indices to fetch data based on the order of the elements in the sequence. You can read more about sequences here: Sequences in Python (Data Structure Categories #2)
In Python, all sequences are iterable, but not all iterables are sequences. For example, dictionaries are iterable but they're not sequences. Therefore, itertools.cycle() can be used on a larger group of data types than create_infinite_sequence().
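For example, here's itertools.cycle() looping over a dictionary, which yields its keys when iterated. This is something create_infinite_sequence() can't handle, since dictionaries don't support integer indexing. (The settings dictionary below is made up for this quick demo.)

import itertools

settings = {"colour": "red", "size": "large"}
keys_cyc = itertools.cycle(settings)  # iterating over a dict yields its keys
next(keys_cyc)
# 'colour'
next(keys_cyc)
# 'size'
next(keys_cyc)
# 'colour'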
And finally, there's another really good reason to use itertools.cycle() instead of a homemade function:
You create two iterators. The first one, infinite_tasks, is the generator you get from the generator function create_infinite_sequence(). Note that all generators are iterators.
The second iterator is infinite_tasks_cyc, which is the iterator that itertools.cycle() returns. All the tools in itertools return iterators.
Finally, you time how long it takes to get the first 10 million elements from each of these infinite iterators. Here's the output I got on my computer—your timings may vary:
Using 'create_infinite_sequence()':
0.753838583000288
Using 'itertools.cycle()':
0.19026683299944125
It's much quicker to use itertools.cycle(). Sure, you may have ideas on writing a more efficient algorithm than the one I used in create_infinite_sequence(). Go ahead, I'm sure you'll be able to do better than create_infinite_sequence(). But can you do better than itertools.cycle()?
Do you want to join a forum to discuss Python further with other Pythonistas? Upgrade to a paid subscription here on The Python Coding Stack to get exclusive access to The Python Coding Place's members' forum. More Python. More discussions. More fun.
And you'll also be supporting this publication. I put plenty of time and effort into crafting each article. Your support will help me keep this content coming regularly and, importantly, will help keep it free for everyone.
Creating the Rota • Combining Tools Using 'Iterator Algebra'
So, Yteria used itertools.cycle() to create two infinite iterators: one for tasks and the other for people. Note that the original lists, tasks and people, don't have the same number of elements.
Next, Yteria needed to find a way to connect these two infinite iterators so that corresponding elements are matched. She needed a way to progress through the two infinite iterators at the same time. She needed something to "glue" them together…
…or better still, to "zip" them together.
This is where zip() comes in. The zip() built-in tool takes a number of iterators and zips them together, grouping the first elements of each iterator together, then grouping the second elements of each iterator together, and so on:
And there it is. Remember that rota is an iterator since zip() returns an iterator. So, each time you fetch the next value from the rota iterator, you'll get a pairing between a person and the chore they need to do.
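To make this concrete, here's an illustrative look at the first few pairings the rota iterator yields. The setup mirrors Yteria's code, with outputs shown as comments:

import itertools

tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]
rota = zip(itertools.cycle(people), itertools.cycle(tasks))
next(rota)
# ('Yteria', 'Take the bins out')
next(rota)
# ('Silvia', 'Clean floor and carpets')
next(rota)
# ('Yteria', 'Wash up')
next(rota)
# ('Silvia', 'Take the bins out')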
Yteria finished this off with some quick code to display each day's rota. It would have been easier to use a for loop, but she couldn't. So she opted for an approach that's less tidy but still works:
You can write the easier for loop version if you prefer. Note how Yteria, who's now proficient with the itertools module, also used itertools.count() to create a counter! She could have just created an integer and incremented it each time, of course.
Side note: The while loop above feels like something that could be implemented with the help of some itertools tools. Yteria felt this way, too. She wrote a note to try to refactor this while loop later, even if just as an exercise in playing with more of the tools in itertools. Do you want to have a go, too? One possible direction is sketched below. If Yteria gets round to replacing this code, I'll let you know in a future post in The 'itertools' Series.
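For instance, here's one possible direction for the inner loop, sketched as an exercise rather than as Yteria's eventual solution. It uses itertools.islice() to take exactly one day's worth of pairings from the rota and itertools.starmap() to format them, with no for keyword in sight (the print_day() helper is made up for this sketch):

import itertools

def print_day(rota, day, task_count):
    print(f"Day {day}:")
    # Take exactly one day's worth of (person, task) pairs from the rota...
    day_rota = itertools.islice(rota, task_count)
    # ...and format each pair without using the 'for' keyword
    lines = itertools.starmap(
        lambda person, task: f"It's {person}'s turn to {task.lower()}",
        day_rota,
    )
    print("\n".join(lines))

Each call to print_day() consumes the next day's pairings from the infinite rota iterator, so calling it repeatedly steps through the days.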
Here's the output from Yteria's code for the first few days:
Press enter for the next day's rota...
Day 1:
It's Yteria's turn to take the bins out
It's Silvia's turn to clean floor and carpets
It's Yteria's turn to wash up
Press enter for the next day's rota...
Day 2:
It's Silvia's turn to take the bins out
It's Yteria's turn to clean floor and carpets
It's Silvia's turn to wash up
Press enter for the next day's rota...
Day 3:
It's Yteria's turn to take the bins out
It's Silvia's turn to clean floor and carpets
It's Yteria's turn to wash up
Press enter for the next day's rota...
Day 4:
It's Silvia's turn to take the bins out
It's Yteria's turn to clean floor and carpets
It's Silvia's turn to wash up
Press enter for the next day's rota...
And of course, this code works with any number of tasks and any number of people.
The itertools documentation page has a great line about combining various iteration tools using 'iterator algebra'. Yteria's solution is an example of this. It combines two iteration tools, zip() and cycle(), to provide a neat solution. The tools in itertools are often useful as standalone tools. But they're even more powerful when you combine them with each other.
Note that zip() and enumerate() aren't part of itertools since they're both built-in callables. However, they fall in the same category as the other tools in itertools—they're tools to help in particular iteration tasks.
Final Words
Problem solved. Yteria and Silvia could now share the daily chores and make sure that everyone contributes equally. Yteria felt that her forced abstention from using the for keyword in Python led her to understand Pythonic iteration a lot better. She felt like an iteration pro now! Iterators are at the heart of iteration in Python. And itertools provides lots of useful iterators.
Code in this article uses Python 3.13
The code images used in this article are created using Snappify. [Affiliate link]
You can also support this publication by making a one-off contribution of any amount you wish.
For more Python resources, you can also visit Real Python—you may even stumble on one of my own articles or courses there!
Also, are you interested in technical writing? You’d like to make your own writing more narrative, more engaging, more memorable? Have a look at Breaking the Rules.
And you can find out more about me at stephengruppetta.com
Further reading related to this article’s topic:
Iterable: Python's Stepping Stones • (Data Structure Categories #1)
A One-Way Stream of Data • Iterators in Python (Data Structure Categories #6)
Pay As You Go • Generate Data Using Generators (Data Structure Categories #7)
If You Find if..else in List Comprehensions Confusing, Read This, Else…
Appendix: Code Blocks
Code Block #1
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]
Code Block #2
def create_infinite_sequence(sequence):
...
Code Block #3
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
import itertools
tasks_cyc = itertools.cycle(tasks)
next(tasks_cyc)
# 'Take the bins out'
next(tasks_cyc)
# 'Clean floor and carpets'
next(tasks_cyc)
# 'Wash up'
next(tasks_cyc)
# 'Take the bins out'
next(tasks_cyc)
# 'Clean floor and carpets'
next(tasks_cyc)
# 'Wash up'
Code Block #4
def create_infinite_sequence(sequence):
index = 0
while True:
yield sequence[index]
index = 0 if index == len(sequence) - 1 else index + 1
Code Block #5
tasks_inf_seq = create_infinite_sequence(tasks)
next(tasks_inf_seq)
# 'Take the bins out'
next(tasks_inf_seq)
# 'Clean floor and carpets'
next(tasks_inf_seq)
# 'Wash up'
next(tasks_inf_seq)
# 'Take the bins out'
next(tasks_inf_seq)
# 'Clean floor and carpets'
next(tasks_inf_seq)
# 'Wash up'
Code Block #6
import itertools
import timeit
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]
def create_infinite_sequence(sequence):
index = 0
while True:
yield sequence[index]
index = 0 if index == len(sequence) - 1 else index + 1
infinite_tasks = create_infinite_sequence(tasks)
infinite_tasks_cyc = itertools.cycle(tasks)
print(
"Using 'create_infinite_sequence()':\n",
timeit.timeit(
"next(infinite_tasks)",
number=10_000_000,
globals=globals(),
)
)
print(
"Using 'itertools.cycle()':\n",
timeit.timeit(
"next(infinite_tasks_cyc)",
number=10_000_000,
globals=globals(),
)
)
Code Block #7
import itertools
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]
rota = zip(
itertools.cycle(people),
itertools.cycle(tasks),
)
Code Block #8
import itertools
tasks = ["Take the bins out", "Clean floor and carpets", "Wash up"]
people = ["Yteria", "Silvia"]
rota = zip(
itertools.cycle(people),
itertools.cycle(tasks),
)
day_counter = itertools.count(start=1)
while True:
input("\nPress enter for the next day's rota...")
day = next(day_counter)
print(f"Day {day}:")
# The next bit would be easier using a 'for' loop,
# but Yteria couldn't do this!
while True:
person, task = next(rota)
print(f"It's {person}'s turn to {task.lower()}")
if task == tasks[-1]:
break
Nikola
Nikola v8.3.3 is out!
On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v8.3.3. This is a bugfix release.
We’ve had to release Nikola v8.3.3 immediately after releasing Nikola v8.3.2, as it is broken on Python 3.8. We would like to thank the Python packaging ecosystem for being an incomprehensible and incompatible trainwreck.
What is Nikola?
Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which rebuilds only what has changed).
Find out more at the website: https://getnikola.com/
Downloads
Install using pip install Nikola.
Changes from v8.3.1
Features
- Trace template usage when an environment variable NIKOLA_TEMPLATES_TRACE is set to any non-empty value.
- Give user control over the raw underlying template engine (either mako.lookup.TemplateLookup or jinja2.Environment) via an optional conf.py method TEMPLATE_ENGINE_FACTORY.
- Switch to pyproject.toml
- Add path handler slug_source linking to source of post.
Bugfixes
- Ignore errors in parsing SVG files for shrinking them, copy original file to output instead (Issue #3785)
- Restore annotation_helper.tmpl with dummy content - fix themes still mentioning it (Issue #3764, #3773)
- Fix compatibility with watchdog 4 (Issue #3766)
- nikola serve now works with non-root SITE_URL.
- Stack traces meaningless for end users now more reliably suppressed (Issue #3838).
Other
- Changed filter for tidy from tidy5 to tidy.
PyCon
Welcoming 8 Companies to Startup Row at PyCon US 2025
PyCon US gives the community a chance to come together and learn about what’s new and interesting about the Python language and the seemingly infinite variety of problems that can be solved with a few (or a few thousand) lines of Python code. For entrepreneurial Pythonistas, Startup Row at PyCon US presents a unique opportunity for startup companies to connect directly with the developer community they’re building for.
Kicked off in 2011, Startup Row at PyCon US gives early-stage startups access to the best of what PyCon US has to offer, including conference passes and booth space, at no cost to their teams. Since its inception, including this year’s batch, well over 150 companies have been featured on Startup Row, and there’s a good chance you are familiar with some of their products and projects. Pandas, Modin, Codon, Ludwig, Horovod, SLSA, and dozens of other open-source tools were built or commercialized by companies featured on Startup Row at PyCon US.
Think of Startup Row at PyCon US as a peek into the future of the Python software ecosystem. And with that, we’re pleased to introduce the 2025 batch!
The Startup Row 2025 Lineup
AgentOps
Building an AI agent that works is only half the battle; seeing why it fails, how much it costs, and whether it’s about to go rogue is the other half. AgentOps gives developers that missing x-ray vision. Drop a two-line SDK into your code and every run is captured as a “session” complete with step-by-step waterfalls, prompt/response pairs, cost and token metrics, and security flags—instantly viewable in a web dashboard.
The idea was born at a San Francisco hackathon, where co-founders Alex Reibman and Adam Silverman discovered that their agent-debugging tools were more popular than the agents themselves. They turned those internal tools into AgentOps, raised a $2.6 million pre-seed led by 645 Ventures and Afore Capital in August 2024, and now give thousands of AI engineers a live dashboard that replays every agent step, surfaces exact cost and latency metrics, and enforces benchmark-driven safety checks—all from a two-line SDK.
Open-sourced under an MIT license, the project has already racked up 4.4k GitHub stars and integrates out-of-the-box with OpenAI Agents SDK, CrewAI, LangChain, AutoGen and dozens of other frameworks. With observability handled, AgentOps wants to be to autonomous agents what Datadog is to micro-services: the layer that makes ambitious agent stacks safe enough for production—and cheap enough to keep running.
All Hands AI
Agentic coding went from a theoretical possibility to reality seemingly overnight, and All Hands AI’s open-source platform OpenHands is one of the reasons why. Written in Python (with a JavaScript front-end), OpenHands lets an AI developer do everything a human can: edit repositories, run shell commands, browse the web, call APIs—even lift snippets straight from Stack Overflow—and then roll it all back into a commit you can review and merge.
Since its first README-only commit just 14 months ago, the project has snowballed into 54k-plus GitHub stars and 6k forks, backed by a community of roughly 310 contributors and counting. The momentum helped the team close a $5 million seed round led by Menlo Ventures last September, giving the ten-person startup runway to layer commercial tooling on top of its permissively-licensed core.
“About six months ago it finally clicked—I now write about 95% of my own code with agents,” says co-founder and chief scientist Graham Neubig, an associate professor at Carnegie Mellon who shipped the project’s first lines before Robert Brennan—now CEO—joined the project and built a globally-distributed team to scale it up. Neubig credits the early decision to ship a “non-functional prototype” and build in public for catalyzing the contributor base; today, community members maintain everything from Windows support to protocol bridges while swapping LLM benchmarks daily in the project’s Slack.
OpenHands has evolved from a weekend proof-of-concept into a community-driven framework that now aims for production-grade reliability as an open alternative to proprietary code agents. Weekly releases focus on reproducible debugging, cost control, and enterprise safeguards, and contributors are already using the system to generate and review real pull requests across a growing set of Python projects.
DiffStudio
Product photos tell a story, but DiffStudio wants to let shoppers walk around that story. The North-Jersey startup is building a camera-agnostic “inverse graphics” pipeline that ingests a handful of ordinary 2-D shots or video and returns a fully-textured, web-ready 3-D model that drops into any product page. The goal is simple: turn scrolling into spinning, pinching, and zooming—and watch engagement and conversions rise.
Founder Naga Karumuri formed the company in December, after months of hacking on the latest developments in Gaussian splatting and differentiable rendering. “You upload a batch of images, and our model hands you a compressed asset—think megabytes, not gigabytes—that Shopify can serve instantly,” Karumuri explained. A companion mobile app will let merchants scan products on the fly, while a web dashboard handles cloud processing and one-click embeds.
DiffStudio’s beachhead market is small- and mid-sized Shopify sellers, and blue-chip retailers are already circling. “In casual chats we’ve had interest from brands like Adidas and Michael Kors,” Karumuri noted, hinting at an eventual move up-market once the self-service MVP launches. Compression and quality are the differentiators: where existing tools like Polycam focus on hobbyist scans or LiDAR-assisted captures, DiffStudio is chasing photo-real fidelity with file sizes that won’t tank page speed. The project’s GitHub repositories showcase early demos and the startup’s open-source commitment.
The team is still lean—Karumuri plus a collaborator—but the vision is outsized: make 3-D product “digital twins” as easy to generate as a product photo set. Or, as their LinkedIn banner puts it, “Splat your products into 3D glory.”
Fabi.ai
Business users shouldn’t have to ping the data team for every ad-hoc question—and data scientists shouldn’t spend half their day writing the same queries on repeat. Fabi.ai positions itself as the AI “side-kick” that lets both camps meet in the middle: a web notebook where natural-language prompts, SQL, Python, and no-code building blocks live side-by-side, with generative agents filling in (and explaining) 90% of the boilerplate.
Founded in 2023 and headquartered in the San Francisco Bay Area, the six-person team works face-to-face in San Mateo to iterate quickly on the product. CEO Marc Dupuis ran embedded analytics at revenue-ops unicorn Clari before teaming up again with former colleague Lei Tang (now CTO) to “let vibe-coders do 95% of their own analysis” while still giving experts an easy way to supervise the last mile.
Eniac Ventures and Outlander VC co-led a $3 million seed round in July 2023 to bring Fabi.ai’s collaborative notebook to market. Early customers already range from fast growing startups to established e-commerce brands.
With BI dashboards stuck on the what and legacy notebooks siloed on individual laptops, Fabi.ai is betting that a cloud-native, agent-augmented workspace is the missing link—and it’s inviting the Python community to kick the tires (and write fewer queries) at PyCon US.
Gooey.ai
Most no-code AI builders stop at slick demos; Gooey.ai is obsessed with what happens after the hype, when a multilingual chatbot has to work for a Kenyan farmer with a 2G signal or a frontline nurse switching between English and Kannada. The open-source, low-code platform stitches together the “best of private and public AI” into reusable workflows—text, speech, vision and RAG pipelines you can fork, remix and ship to WhatsApp, SMS, Slack or the web from a single dashboard. One billing account, one-click deploy.
Founders Sean Blagsvedt (ex-Microsoft Research, founder of Indian job-matching startup Babajob), Archana Prasad (artist-turned-social-tech entrepreneur), and CTO Dev Aggarwal split their time between Seattle and Bangalore and run the company under the umbrella of Dara Network. Their thesis: impactful AI needs to be both affordable and local—so Gooey bakes in speech recognition for low-resource languages, translation APIs like India’s Bhashini, and zero-data-retention options for NGOs handling sensitive chats.
Real-world traction is already visible. An agronomy WhatsApp bot built on Gooey reached “tens of thousands of farmers in Kenya, India, Ethiopia and Rwanda,” delivering accurate, objective answers with page-level citations. The platform’s copilot builder now supports the latest GPT-4o, Llama 3, Claude, Gemini and Mistral models; integrates OCR, vision and text-to-speech; and ships bulk evaluation harnesses so teams can test new prompts before they hit production.
To seed more grassroots projects, Gooey recently launched a Workflow Accelerator with funding from The Rockefeller Foundation, covering model and SMS costs for six NGOs and open-sourcing every workflow that emerges. If you’re looking to take an AI pilot from “cool demo” to “24/7 field tool,” Gooey.ai wants to be the glue—and the infra—you won’t outgrow.
GripTape AI
Enterprise AI teams love the idea of autonomous agents, but hate the roulette wheel of prompt-only code. Griptape steps in with a Python framework that keeps creativity where it belongs—inside LLM calls—while wrapping every outside step in predictable, testable software patterns. Agents, sequential pipelines, and parallel workflows are first-class “Structures”; memory, rulesets, and observability are plug-in Drivers; and an “Off-Prompt” mechanism pushes sensitive or bulky data out of the prompt for lower cost and higher security.
The project launched in early 2023 and has already gathered ≈2.3k GitHub stars and an active Discord community. Adoption accelerated after co-founders Kyle Roche and Vasily Vasinov—both former AWS leaders—closed a $12.5 million Seed Round in September 2023 led by Seattle’s FUSE and Acequia Capital. The fresh capital funds Griptape Cloud, a fully managed runtime that hosts ETL pipelines, hybrid vector knowledge bases, and structure executions while piping metrics to whatever monitoring stack a Fortune 500 already uses.
Under the Apache-2.0 license, developers can start locally, swap between OpenAI, Bedrock or Anthropic drivers, and graduate to the cloud only when they need auto-scaling or policy enforcement. In short, Griptape aims to be the Django of agentic AI: batteries-included, prod-ready, and with enough guardrails that even the compliance team can sleep at night.
Griptape also recently launched Griptape Nodes, an intuitive, drag-and-drop interface where designers, artists and other creative professionals can create advanced creative pipelines using graphs, nodes, and flowcharts to exploit state-of-the-art image generation and image processing models, together with more “traditional” large language models.
MLJAR
Most AutoML platforms lock you into a browser tab and someone else’s GPU cluster. MLJAR takes the opposite approach: everything runs locally, yet you still get the “train, explain, and deploy” cycle in a single click.
The Polish-based project began in 2016, when founder Piotr Płoński—fresh from a PhD spent building models for physicists, bioinformaticians, and telecom giants—decided he was tired of rewriting the same pipelines over and over. Impatience, not laziness, pushed him to automate the entire workflow.
Today the three-person team (Piotr, his co-founder wife, and a close friend) maintains a fully open-source stack. The flagship MLJAR-AutoML package handles feature engineering, hyper-parameter search, and rich Markdown reports; Mercury turns any Jupyter notebook into a shareable web app or dashboard with a sprinkle of widgets; and the brand-new MLJAR Studio Desktop app bundles its own Python environment, point-and-click “code recipes,” an integrated GPT-4 assistant, and a one-button Share that converts a notebook into a live web application.
Open source is more than a distribution strategy—it’s a trust signal. One recognisable enterprise adopted the package under an MIT license and then contracted the team for advanced features such as fairness-aware training. Revenue is a side effect; the primary goal is software that makes data science faster, friendlier, and fully under the user’s control.
If you’ve ever wished Streamlit met AutoML—and ran natively on your laptop—swing by the MLJAR booth on Startup Row at PyCon US and take Studio for a spin.
Ragas
Seemingly everyone is building RAG pipelines, but almost no one is measuring them. Ragas sets out to be “pytest for Retrieval-Augmented Generation,” bundling ready-made metrics—context recall, faithfulness, answer relevancy—and auto-generated test sets so teams can turn vibe checks into repeatable CI tests. Drop the library into LangChain, LlamaIndex, or plain-Python code and Ragas spits out a single “Ragas Score” (plus per-metric drill-downs) that tracks whether your latest prompt tweak fixed accuracy or broke it.
The project landed a shout-out during OpenAI’s Dev Day and has since snowballed to 9.1k GitHub stars and 900+ forks, with more than 80 external contributors. In production it now processes ~5 million evaluations a month for engineers at AWS, Microsoft, Databricks, and Moody’s—a number growing 70% month-over-month.
Co-founders Jithin James (early engineer at BentoML) and Shahul ES (Kaggle Grandmaster, core contributor to Open-Assistant) met at college, hacked on open-source together for years, and entered Y Combinator’s W24 batch to turn their weekend project into a commercial platform. Their plan: keep the core evaluator MIT-licensed while DG Labs, the commercial arm, layers team dashboards, experiment tracking, and dataset management on top—so every product squad can ship RAG updates with CI-style confidence.
Thank You’s and Acknowledgements
There are far too many stakeholders in the ongoing success of Startup Row at PyCon US to name individually, but this program would not be possible without the following folks:
- The Python Software Foundation, for its continued support of this little corner of PyCon US.
- The PSF Sponsorship team, for managing the logistics of getting everyone registered and set up for success
- Startup Row co-organizers, Jason D. Rowley (p.s. hey, that's me!) and collaborator Shea Tate-Di Donna, whose first experience with the Python community was presenting her company, Zana, on Startup Row at PyCon US 2015.
- Startup Row alumni companies that come back as paid sponsors at PyCon US. Shoutouts to Anvil (SR’17), Chainguard (SR’22), and Dagster (SR’21), whose support helps make Startup Row at PyCon US possible.
- To all startup founders who filled out the (mercifully brief) application. To those that did not get a spot this year, we appreciate your time and attention. To those that did: a hearty congratulations.
- To the selection committee, for accomplishing the difficult task of evaluating and scoring applications.
May 16, 2025
Real Python
The Real Python Podcast – Episode #249: Going Beyond requirements.txt With pylock.toml and PEP 751
What is the best way to record the Python dependencies for the reproducibility of your projects? What advantages will lock files provide for those projects? This week on the show, we welcome back Python Core Developer Brett Cannon to discuss his journey to bring PEP 751 and the pylock.toml file format to the community.
Django Weblog
Our Google Summer of Code 2025 contributors
We’re excited to introduce our Google Summer of Code 2025 contributors!
These amazing folks will be working on impactful projects that will shape Django’s future. Meet the contributors 👇
A. Rafey Khan
Project: Django Admin – Add Keyboard Shortcuts & Command Palette. Mentors: Tom Carrick, Apoorv Garg
Rafey will work on making Django Admin faster and more accessible through keyboard-driven workflows. Excited to see this land!
Farhan Ali Raza
Project: Bring django-template-partials into core. Mentor: Carlton Gibson
Farhan will be enhancing Django’s template system by adding first-class support for partials—making componentized templates easier than ever.
Saurabh K
Project: Automate processes within Django’s contribution workflow. Mentor: Lily Foote
Saurabh will work on streamlining how contributors interact with the Django repo—automating repetitive tasks and improving the dev experience for all.
A huge shoutout to our mentors (and Org Admin Bhuvnesh Sharma) and the broader Django community for supporting these contributors! 💚
Let’s make this a summer of learning, building, and collaboration.
Daniel Roy Greenfeld
Farewell to Michael Ryabushkin
Michael Ryabushkin and I met around 2011-2012 through Python community work. I don't remember how we met, instead I remember his presence suddenly there, helping and aiding others.
Michael could be pushy. He was trying to help people reach their full potential. His energy and humor were relentless; I admired his tenacity and giving nature.
While our coding preferences usually clashed, sometimes they matched. Then we would rant together about some tiny detail, those talks plus the silly Tai Chi dance we did are lovely memories I have of Michael.
In 2016 my wife Audrey had emergency surgery. For me that meant sleepless days taking care of her. Suddenly Michael's presence was there. He took shifts, ran errands (including buying a wheelchair), and forced me to sleep. I am forever grateful to Michael for what he did for us.
In early 2020 Audrey and I got last minute approval to use a large conference space to organize an event called PyBeach. Michael heard about it and as always, suddenly his presence was there. He was not just a volunteer at large, but leading the conference with us. Michael and I had our shared code rants, did our silly Tai Chi dance, and he met our baby daughter.
Between the pandemic and us moving from the Los Angeles area I didn't get the chance to see Michael again. I'll miss our rants, our silly Tai Chi dance, and his sudden appearances.
SoCal Python has created a memorial page in Michael's honor.
Brett Cannon
Unravelling t-strings
PEP 750 introduced t-strings for Python 3.14. In fact, they are so new that as of Python 3.14.0b1 there still isn't any documentation yet for t-strings. 😅 As such, this blog post will hopefully help explain what exactly t-strings are and what you might use them for by unravelling the syntax and briefly talking about potential uses for t-strings.
What are they?
I like to think of t-strings as a syntactic way to expose the parser used for f-strings. I'll explain later what that might be useful for, but for now let's see exactly what t-strings unravel into.
Let's start with an example by trying to use t-strings to mostly replicate f-strings. We will define a function named f_yeah() which takes a t-string and returns what the string would have been had it been an f-string (e.g. f"{42}" == f_yeah(t"{42}")). Here is the example we will be working with and slowly refining:
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
return t_string
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
actual = f_yeah(expected)
assert actual == expected
As of right now, f_yeah() is just the identity function: it takes the actual result of an f-string and returns it unchanged, which is pretty boring and useless. So let's parse what the t-string would be into its constituent parts:
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
return "".join(t_string)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
"world",
"! Conversions like ",
"&aposworld&apos",
" and format specs like ",
"world ",
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
Here we have split the f-string output into a list of the string parts that make it up, joining it all together with "".join(). This is actually what the bytecode for f-strings does once it has converted everything in the replacement fields – i.e. what's in the curly braces – into strings.
But this is still not that interesting. We can definitely parse out more information.
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
return "".join(t_string)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
name,
"! Conversions like ",
repr(name),
" and format specs like ",
format(name, "<6"),
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
Now we have substituted the string literals we had for the replacement fields with what Python does behind the scenes with conversions like !r and format specs like :<6. As you can see, there are effectively three parts to handling a replacement field:
- Evaluating the Python expression
- Applying any specified conversion (let's say the default is None)
- Applying any format spec (let's say the default is "")
So let's get our "parser" to separate all of that out for us into a tuple of 3 items: value, conversion, and format spec. That way we can have our f_yeah() function handle the actual formatting of the replacement fields.
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case (value, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
(name, None, ""),
"! Conversions like ",
(name, "r", ""),
" and format specs like ",
(name, None, "<6"),
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
Now we have f_yeah() taking the value from the expression of the replacement field, applying the appropriate conversion, and then passing that on to format(). This gives us a more useful parsed representation! Since we have the string representation of the expression, we might as well just keep that around even if we don't use it in our example (parsers typically don't like to throw information away).
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case (value, _, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
(name, "name", None, ""),
"! Conversions like ",
(name, "name", "r", ""),
" and format specs like ",
(name, "name", None, "<6"),
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
The next thing we want is for our parsed output to be a bit easier to work with. A 4-item tuple is a bit unwieldy, so let's define a class named Interpolation that will hold all the relevant details of the replacement field.
class Interpolation:
__match_args__ = ("value", "expression", "conversion", "format_spec")
def __init__(
self,
value,
expression,
conversion=None,
format_spec="",
):
self.value = value
self.expression = expression
self.conversion = conversion
self.format_spec = format_spec
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case Interpolation(value, _, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = [
"Hello, ",
Interpolation(name, "name"),
"! Conversions like ",
Interpolation(name, "name", "r"),
" and format specs like ",
Interpolation(name, "name", format_spec="<6"),
" work!",
]
actual = f_yeah(parsed)
assert actual == expected
That's better! Now we have an object-oriented structure to our parsed replacement field, which is easier to work with than the 4-item tuple we had before. We can also extend this object-oriented organization to the list we have been using to hold all the parsed data.
class Interpolation:
__match_args__ = ("value", "expression", "conversion", "format_spec")
def __init__(
self,
value,
expression,
conversion=None,
format_spec="",
):
self.value = value
self.expression = expression
self.conversion = conversion
self.format_spec = format_spec
class Template:
def __init__(self, *args):
# There will always be N+1 strings for N interpolations;
# that may mean inserting an empty string at the start or end.
strings = []
interpolations = []
if args and isinstance(args[0], Interpolation):
strings.append("")
for arg in args:
match arg:
case str():
strings.append(arg)
case Interpolation():
interpolations.append(arg)
if args and isinstance(args[-1], Interpolation):
strings.append("")
self._iter = args
self.strings = tuple(strings)
self.interpolations = tuple(interpolations)
@property
def values(self):
return tuple(interpolation.value for interpolation in self.interpolations)
def __iter__(self):
return iter(self._iter)
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case Interpolation(value, _, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = Template(
"Hello, ",
Interpolation(name, "name"),
"! Conversions like ",
Interpolation(name, "name", "r"),
" and format specs like ",
Interpolation(name, "name", format_spec="<6"),
" work!",
)
actual = f_yeah(parsed)
assert actual == expected
And that's t-strings! We parsed f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!" into Template("Hello, ", Interpolation(name, "name"), "! Conversions like ", Interpolation(name, "name", "r"), " and format specs like ", Interpolation(name, "name", format_spec="<6"), " work!"). We were then able to use our f_yeah() function to convert the t-string into what an equivalent f-string would have looked like. The actual code to use to test this in Python 3.14 with an actual t-string is the following (PEP 750 has its own version of converting a t-string to an f-string which greatly inspired my example):
from string import templatelib
def f_yeah(t_string):
"""Convert a t-string into what an f-string would have provided."""
converters = {func.__name__[0]: func for func in (str, repr, ascii)}
converters[None] = str
parts = []
for part in t_string:
match part:
case templatelib.Interpolation(value, _, conversion, format_spec):
parts.append(format(converters[conversion](value), format_spec))
case str():
parts.append(part)
return "".join(parts)
if __name__ == "__main__":
name = "world"
expected = f"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
parsed = t"Hello, {name}! Conversions like {name!r} and format specs like {name:<6} work!"
actual = f_yeah(parsed)
assert actual == expected
What are t-strings good for?
As I mentioned earlier, I view t-strings as a syntactic way to get access to the f-string parser. So, what do you usually use a parser with? The stereotypical thing is compiling something. Since we are dealing with strings here, what are some common strings you "compile"? The most common answers are things like SQL statements and HTML: things that require some processing of what you pass into a template to make sure something isn't going to go awry. That suggests that you could have a sql() function that takes a t-string and compiles a SQL statement that avoids SQL injection attacks. Same goes for HTML and JavaScript injection attacks.
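Here's a minimal sketch of what such a sql() function could look like, built on the Template and Interpolation types we just unravelled. The sql() helper and its qmark placeholder style are illustrative, not an existing library API:

from string.templatelib import Interpolation, Template

def sql(query: Template) -> tuple[str, list]:
    """Split a t-string into a parameterized statement plus its values."""
    text_parts = []
    params = []
    for part in query:
        if isinstance(part, Interpolation):
            # Each interpolation becomes a placeholder; its value travels
            # separately so the database driver handles quoting safely.
            text_parts.append("?")
            params.append(part.value)
        else:
            text_parts.append(part)
    return "".join(text_parts), params

user_id = 42
statement, params = sql(t"SELECT * FROM users WHERE id = {user_id}")
# statement == 'SELECT * FROM users WHERE id = ?'
# params == [42]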
Add in logging and you get the common examples. But I suspect that the community is going to come up with some interesting uses of t-strings and their parsed data (e.g. PEP 787 and using t-strings to create the arguments to subprocess.run())!
May 15, 2025
First Institute of Reliable Software
New Template Strings in Python 3.14
Template strings (t-strings) are a new syntax in Python 3.14 that defers interpolation. The post covers an explanation and examples, how to mask secret data in output, and how to install Python 3.14 to test the new functionality.
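As a taste of what deferred interpolation enables, here's a minimal sketch of masking secrets, based on the string.templatelib API in Python 3.14. The redact() helper and the set of secret names are illustrative, not part of the linked article:

from string.templatelib import Interpolation, Template

SECRET_NAMES = {"password", "api_key", "token"}  # illustrative

def redact(template: Template) -> str:
    """Render a t-string, masking interpolations that look like secrets."""
    parts = []
    for part in template:
        if isinstance(part, Interpolation):
            if part.expression in SECRET_NAMES:
                parts.append("****")
            else:
                # Conversions like !r are ignored in this simplified sketch
                parts.append(format(str(part.value), part.format_spec))
        else:
            parts.append(part)
    return "".join(parts)

user, password = "alice", "hunter2"
print(redact(t"Logging in as {user} with password {password}"))
# Logging in as alice with password ****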
Django Weblog
Our new accessibility statement
Happy Global Accessibility Awareness Day! We thought this would be a fitting occasion to announce our brand new Django accessibility statement 🎉
Did you know that according to the WebAIM Million survey, 94.6% of sites have easily-detectable accessibility issues? We all need to work together to build a more inclusive web (also check out our diversity statement if you haven’t already!). There are accessibility gaps in Django itself too. This statement improves transparency, and clearly states our intentions. And we hope it encourages our community and the industry at large to more widely consider accessibility.
How to use this statement
Read it, share it with your friends, or in a procurement context!
- Use it to understand where there are gaps in Django that need to be addressed on projects.
- And opportunities to contribute to Django and related projects ❤️
- Factor it into legal compliance. For example with the European Accessibility Act. Starting June 2025, accessibility becomes a legal requirement for large swaths of the private sector in the European Union.
- Share it with venues for Django events to demonstrate the importance of accessibility for their competitiveness.
How you can help
Take a moment to provide any feedback you might have about the statement on the Django Forum. Let us know if you would prefer additional reporting like an ATAG audit, or VPAT, ACR, or any other acronym. Let us know if you’d like to contribute to the accessibility of the Django community! 🫶
Ned Batchelder
PyCon summer camp
I’m headed to PyCon today, and I’m reminded about how it feels like summer camp, in mostly good ways, but also in a tricky way.
You take some time off from your “real” life, you go somewhere else, you hang out with old friends and meet some new friends. You do different things than in your real life, some are playful, some take real work. These are all good ways it’s like summer camp.
Here’s the tricky thing to watch out for: like summer camp, you can make connections to people or projects that are intense and feel like they could last forever. You make friends at summer camp, or even have semi-romantic crushes on people. You promise to stay in touch, you think it’s the “real thing.” When you get home, you write an email or two, maybe a phone call, but it fades away. The excitement of the summer is overtaken by your autumnal real life again.
PyCon can be the same way, either with people or projects. Not a romance, but the exciting feeling that you want to keep doing the project you started at PyCon, or be a member of some community you hung out with for those days. You want to keep talking about that exciting thing with that person. These are great feelings, but it’s easy to emotionally over-commit to those efforts and then have it fade away once PyCon is over.
How do you know what projects are just crushes, and which are permanent relationships? Maybe it doesn’t matter, and we should just get excited about things.
I know I started at least one effort last year that I thought would be done in a few months, but has since stalled. Now I am headed back to PyCon. Will I become attached to yet more things this time? Is that bad? Should I temper my enthusiasm, or is it fine to light a few fires and accept that some will peter out?
Zato Blog
Using Oracle Database from Python and Zato Services
Overview
Oracle Database remains a cornerstone of enterprise IT, powering mission-critical applications around the world. Integrating Oracle with Python unlocks automation, reporting, and API-based workflows. In this article, you'll learn how to:
- Connect to Oracle Database from Python
- Use Oracle Database in Zato services
- Execute SQL queries and call stored procedures
- Understand the underlying SQL objects
All examples are based on real-world use cases and follow best practices for security and maintainability.
Why Use Oracle Database from Python?
Python is a popular language for automation, integration, and data processing. By connecting Python to Oracle Database, you can:
- Automate business processes
- Build APIs that interact with enterprise data
- Run analytics and reporting jobs
- Integrate with other systems using Zato
Using Oracle Database in Zato Services
SQL connections are configured in the Dashboard, and you can use them directly in your service code.
In the services below, the logic is split across several dedicated services, each responsible for a specific operation. This separation improves clarity, reusability, and maintainability.
Setting Up: Oracle Database Objects
First, let's start with the basic SQL objects used in our examples:
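The users table itself is assumed to look something like this (a minimal plausible definition consistent with the inserts and procedures below):

-- Assumed table definition (not shown in the original samples)
CREATE TABLE users (
    user_id  NUMBER PRIMARY KEY,
    username VARCHAR2(100) NOT NULL
);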
-- Sample data
INSERT INTO users (user_id, username) VALUES (1, 'john_doe');
INSERT INTO users (user_id, username) VALUES (2, 'jane_smith');
INSERT INTO users (user_id, username) VALUES (3, 'bob_jones');
-- Stored procedure: process_data
CREATE OR REPLACE PROCEDURE process_data (
input_num IN NUMBER,
input_str IN VARCHAR2,
output_num OUT NUMBER,
output_str OUT VARCHAR2
)
AS
BEGIN
output_num := input_num * 2;
output_str := 'Input was: ' || input_str;
END process_data;
/
-- Stored procedure: get_users
CREATE OR REPLACE PROCEDURE get_users (
recordset OUT SYS_REFCURSOR
)
AS
BEGIN
OPEN recordset FOR
SELECT user_id, username
FROM users
ORDER BY user_id;
END get_users;
/
1. Querying All Users
This service retrieves all users from the users table.
# -*- coding: utf-8 -*-
from zato.server.service import Service
class GetAllUsers(Service):
""" Service to retrieve all users from the database.
"""
def handle(self):
# Obtain a reference to the configured Oracle Database connection
conn = self.out.sql['My Oracle DB']
# Define the SQL query to select all rows from the users table
query = 'select * from users'
# Execute the query; returns a list of dictionaries, one per row
response = conn.execute(query)
# Set the service response to the query result
self.response.payload = response
A sample response:
[
    {"user_id": 1, "username": "john_doe"},
    {"user_id": 2, "username": "jane_smith"},
    {"user_id": 3, "username": "bob_jones"}
]
Explanation:
- The service connects to Oracle using the configured connection.
- It executes a simple SQL query to fetch all user records.
- The result is returned as the service response payload.
2. Querying a Specific User by ID
This service fetches a user by their user_id using a parameterized query. There are multiple ways to retrieve results depending on whether you expect one or many rows.
# -*- coding: utf-8 -*-
from zato.server.service import Service
class GetUserById(Service):
""" Service to fetch a user by their user_id.
"""
def handle(self):
# Get the Oracle Database connection from the pool
conn = self.out.sql['My Oracle DB']
# Parameterized SQL to prevent injection
query = 'select * from users where user_id = :user_id'
# In a real service, this would be read from incoming JSON
params = {'user_id': 1}
# Execute the query with parameters; returns a list
response = conn.execute(query, params)
# Set the result as the service's response
self.response.payload = response
Explanation:
- The service expects user_id in the request payload.
- It uses a parameterized query to prevent SQL injection.
- The result is always a list, even if only one row matches.
3. Calling a Stored Procedure with Input and Output Parameters
This service demonstrates how to call an Oracle stored procedure that takes input values and returns output values.
# -*- coding: utf-8 -*-
# Zato
from zato.common.oracledb import NumberIn, NumberOut, StringIn, StringOut
from zato.server.service import Service
class CallProcessData(Service):
""" Service to call a stored procedure with input/output params.
"""
def handle(self):
# Obtain Oracle Database connection
conn = self.out.sql['My Oracle DB']
# Prepare input parameter for NUMBER
in_num = NumberIn(333)
# Prepare input parameter for VARCHAR2
in_str = StringIn('Hello')
# Prepare output parameter for NUMBER (will be written to by the procedure)
out_num = NumberOut()
# Prepare output parameter for VARCHAR2, specify max buffer size (optionally)
out_str = StringOut(size=200)
# Build the parameter list in the order expected by the procedure
params = [in_num, in_str, out_num, out_str]
# Call the stored procedure with the parameters
response = conn.callproc('process_data', params)
# Return the output values as a dictionary in the response
self.response.payload = {
'output_num': out_num.get(),
'output_str': out_str.get()
}
Explanation:
- The service prepares input and output parameters using helper classes.
- It calls the process_data procedure with both input and output arguments.
- The result includes both output values, returned as a dictionary.
- Note that you always need to provide the parameters for the procedure in the same order as they were declared in the procedure itself.
4. Calling a Procedure Returning Multiple Rows
This service calls a procedure that returns a set of rows (a cursor) and collects the results.
# -*- coding: utf-8 -*-
from zato.common.oracledb import RowsOut
from zato.server.service import Service
class CallGetUsers(Service):
""" Service to call a procedure returning a set of rows.
"""
def handle(self):
# Get Oracle Database connection
conn = self.out.sql['My Oracle DB']
# Prepare a RowsOut object to receive the result set
rows_out = RowsOut()
# Build parameter list for the procedure
params = [rows_out]
# Call the procedure, populating rows
conn.callproc('get_users', params)
# Convert the cursor results to a list of rows
rows = list(rows_out.get())
# Return the list as the service response
self.response.payload = rows
Explanation:
- The service prepares a RowsOut object to receive the rows. That is, the procedure will write rows into this object.
- It calls the get_users procedure, which populates the rows.
- You call rows_out.get() to get the actual rows from the database.
- The rows are converted to a list and returned as the payload.
5. Returning a Single Object
When you know your query will return a single row, you can use conn.one or conn.one_or_none for more predictable results:
# -*- coding: utf-8 -*-
from zato.server.service import Service
class GetSingleUserById(Service):
""" # Service to fetch exactly one user or raise if not found/ambiguous.
"""
def handle(self):
# Get the Oracle Database connection
conn = self.out.sql['My Oracle DB']
# Parameterized SQL query
query = 'select * from users where user_id = :user_id'
# In a real service, this would be read from incoming JSON
params = {'user_id': 1}
# conn.one returns a dict if exactly one row, else raises (zero or multiple rows)
result = conn.one(query, params)
# Return the single user as the response
self.response.payload = result
class GetSingleUserOrNoneById(Service):
""" Service to fetch one user, None if not found, or raise an Exception if ambiguous.
"""
def handle(self):
# Get Oracle Database connection
conn = self.out.sql['My Oracle DB']
# SQL with named parameter
query = 'select * from users where user_id = :user_id'
# Extract user_id from payload
params = {'user_id': 1}
# conn.one_or_none returns a dict if one row, None if zero, raises if multiple rows
result = conn.one_or_none(query, params)
# Return dict or None
self.response.payload = result
Explanation:
- conn.one(query, params) will return a single row as a dictionary if exactly one row is found. If no rows or more than one row are returned, it raises an exception.
- conn.one_or_none(query, params) will return a single row as a dictionary if one row is found, None if no rows are found, but still raises an exception if more than one row is returned.
- Use these methods when you expect either exactly one or zero/one results, and want to handle them cleanly.
Key Concepts Explained
- Connection Management: Zato handles connection pooling and configuration for you. Use self.out.sql['My Oracle DB'] to get a ready-to-use connection.
- Parameterized Queries: Always use parameters (e.g., :user_id) to avoid SQL injection and improve code clarity.
- Calling Procedures: Use helper classes (NumberIn, StringIn, NumberOut, StringOut, RowsOut) for input/output arguments and recordsets.
- Service Separation: Each service is focused on a single responsibility, making code easier to test and reuse.
Security and Best Practices
- Always use parameterized queries for user input.
- Manage credentials and connection strings securely (never hardcode them in source code).
- Handle exceptions and database errors gracefully in production code (a short sketch follows this list).
- Use connection pooling (Zato does this for you) for efficiency.
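As a minimal sketch of the error-handling advice above, assuming the conn.one API shown earlier; the service name, log message, and error payload are illustrative rather than part of the original tutorial:
# -*- coding: utf-8 -*-
from zato.server.service import Service

class GetUserSafely(Service):
    """ Service showing one way to surface database errors gracefully.
    """
    def handle(self):

        # Get Oracle Database connection
        conn = self.out.sql['My Oracle DB']

        # Parameterized SQL query, as in the examples above
        query = 'select * from users where user_id = :user_id'
        params = {'user_id': 1}

        try:
            # conn.one raises if zero or multiple rows are returned
            result = conn.one(query, params)
        except Exception:
            # Log the details server-side and return a generic message,
            # rather than leaking database internals to the caller
            self.logger.warning('User lookup failed for params `%s`', params)
            self.response.payload = {'error': 'User lookup failed'}
        else:
            self.response.payload = result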
Summary
Integrating Oracle Database with Python and Zato services gives you powerful tools for building APIs, automating workflows, and connecting enterprise data sources.
Whether you need to run queries, call stored procedures, or expose Oracle data through REST APIs, Zato provides a robust and Pythonic way to do it.
More resources
➤ Python API integration tutorials
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
➤ Open-source iPaaS in Python
Erik Marsja
Pandas: Drop Columns By Name in DataFrames
This blog post will cover Pandas drop columns by name from a single DataFrame and multiple DataFrames. This is a common task when working with large datasets in Python, especially when you want to clean your data or remove unnecessary information. We have previously looked at how to drop duplicated rows in a Pandas DataFrame, and now we will focus on dropping columns by name.
Table of Contents
- How to use Pandas to drop Columns by Name from a Single DataFrame
- Dropping Multiple Columns by Name in a Single DataFrame
- Dropping Columns from Multiple Pandas DataFrames
- Dropping Columns Conditionally from a Pandas DataFrame Based on Their Names
- Summary
- Resources
How to use Pandas to drop Columns by Name from a Single DataFrame
The simplest scenario is when we have a single DataFrame and want to drop one or more columns by their names. We can do this easily using the Pandas drop() function. Here is an example:
import pandas as pd
# Create a simple DataFrame
df = pd.DataFrame({
'A': [1, 2, 3],
'B': [4, 5, 6],
'C': [7, 8, 9]
})
# Drop column 'B' by name
df = df.drop(columns=['B'])
print(df)
In the code chunk above, we drop column ‘B’ from the DataFrame df using the drop() function. We specify the column to remove by name within the columns parameter. The operation returns a new DataFrame with the ‘B’ column removed, and the result is assigned back to df.
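Two optional arguments are worth knowing here; a small sketch, not from the original example, using the documented inplace and errors parameters of drop():
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})

# inplace=True mutates df directly instead of returning a new DataFrame
df.drop(columns=['B'], inplace=True)

# errors='ignore' skips names that aren't present instead of raising KeyError
df = df.drop(columns=['Z'], errors='ignore')
print(df)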

Compare it to the original dataframe before column ‘B’ was dropped:

Dropping Multiple Columns by Name in a Single DataFrame
If we need to drop multiple columns simultaneously, we can pass a list of column names to the drop() function. Here is how we can remove multiple columns from a DataFrame:
# Drop columns 'A' and 'C'
df = df.drop(columns=['A', 'C'])
print(df)
In the code above, we removed both columns ‘A’ and ‘C’ from the DataFrame by specifying them in a list. The resulting DataFrame only contains the column ‘B’.

Dropping Columns from Multiple Pandas DataFrames
When working with multiple DataFrames, we might want to drop the same columns by name. We can achieve this by iterating over our DataFrames and applying the drop() function to each one.
# Create two DataFrames
df1 = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
df2 = pd.DataFrame({'A': [10, 11, 12], 'B': [13, 14, 15], 'C': [16, 17, 18]})
# List of DataFrames
dfs = [df1, df2]
# Drop column 'B' from all DataFrames
dfs = [df.drop(columns=['B']) for df in dfs]
# Print the result
for df in dfs:
print(df)
In the code chunk above, we first add our two DataFrames, df1 and df2, to a list called dfs so that we can efficiently perform operations on multiple DataFrames at once. Then, using a list comprehension, we drop column ‘B’ from each DataFrame in the list by applying the drop() function to each one. The result is a new list of DataFrames with the ‘B’ column removed from each.

Dropping Columns Conditionally from a Pandas DataFrame Based on Their Names
In some cases, we might not know in advance which columns we want to drop but wish to drop columns based on specific conditions. For instance, we might want to drop all columns that contain a particular string or pattern in their name.
# Drop columns whose names contain the letter 'A'
df = df.drop(columns=[col for col in df.columns if 'A' in col])
print(df)
In the code above, we used a list comprehension to identify columns whose names contain the letter ‘A’. We then dropped these columns from the DataFrame.
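An equivalent approach, not used in the post but built on the same idea, is to keep the columns you want with a boolean mask over df.columns; a minimal sketch (the column names are made up):
import pandas as pd

df = pd.DataFrame({'Apple': [1], 'Banana': [2], 'Cherry': [3]})

# Keep only the columns whose names do not contain the letter 'A'
df = df.loc[:, ~df.columns.str.contains('A')]
print(df)  # columns: Banana, Cherry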
Summary
In this post, we covered several ways to drop columns by name in Pandas, both in a single DataFrame and across multiple DataFrames. We demonstrated how to remove specific columns, drop multiple columns at once, and even apply conditions for column removal. These techniques are essential for data cleaning and preparation in Python, especially when working with large datasets. By mastering these methods, you can handle your data more efficiently and streamline your data manipulation tasks.
Feel free to share this post if you found it helpful, and leave a comment below if you would like me to cover other aspects of pandas or data manipulation in Python!
Resources
Here are some more Pandas-related tutorials:
- Pandas Tutorial: Renaming Columns in Pandas Dataframe
- A Basic Pandas Dataframe Tutorial for Beginners
- Six Ways to Reverse Pandas dataframe
Django Weblog
DjangoCon Europe and beyond
Credit: DjangoCon Europe 2025 organizers
We had a blast at DjangoCon Europe 2025, and hope you did too! Events like this are essential for our community, delighting both first-timers and seasoned Djangonauts with insights, good vibes, and all-around inspiration. This year’s conference brought together brilliant minds from all corners of the globe. And featured early celebrations of Django’s 20th birthday! ⭐️🎂🎉
Django launched in 2005, so it turns 20 in 2025, and the conference was a great occasion for our community to celebrate this and to work together on the sustainability of the project.
We need more code reviews
Our Django Fellow Sarah Boyce kicked off the conference with a call for more contributions – of the reviewing kind. In her words,
Django needs your help. Every day, contributors submit pull requests and update existing PRs, but there aren't enough reviewers to keep up. Learn why Django needs more reviewers and how you can help get changes merged into core.
We need more fundraising
Our Vice President Sarah Abderemane got on stage to encourage more financial support of Django from attendees, showcasing how simple it is to donate to the project (get your boss to do it!). We have ambitious plans for 2025, which will require us to grow the Foundation’s budget accordingly.
Annual meeting of DSF Members
Our Board members Tom Carrick, Thibaud Colas, Sarah Abderemane, and Paolo Melchiorre were at the conference to organize a meeting of Members of the Django Software Foundation. This was a good occasion to discuss long-standing topics, and issues of the moment, such as:
- Diversity, equity and inclusion. Did you know we recently got awarded the CHAOSS DEI bronze badge? We need to keep the momentum in this area.
- Management of the membership at the Foundation, with different visions on how much the membership is a recognition or a commitment (or both). There was particular interest in sharing more calls to action with members.
- Content of the website. A long-standing area for improvement (which we’re working on!)
All in all this was a good opportunity for further transparency, and to find people who might be interested in contributing to those areas of our work in the future.
Birthday celebrations
There was a cake (well, three!). Candles to blow out. And all-around great vibes and smiles, with people taking pictures and enjoying specially-made Django stickers!
Up next
We have a lot more events coming up this year where the Foundation will be present, and bringing celebrations of Django’s 20th birthday!
PyCon US 2025
It’s on, now! And we’re present, with a booth. Come say hi! There will be Django stickers available.
PyCon Italia 2025
Some of the PyCon Italia team was there at DjangoCon Europe to hype up their event – and we’ll definitely be there in Bologna! They promised better coffee 👀, and this will have to be independently verified. Check out their Djangonauts at PyCon Italia event.
EuroPython 2025
We got to meet up with some of the EuroPython crew at DjangoCon Europe, and we’ll definitely be at their conference too, as one of the EuroPython community partners 💚. There may well be birthday cake there as well, so get your tickets!
Django events
And if you haven’t already, be sure to check out our next flagship Django events!
Thank you to everyone who joined us at DjangoCon Europe, and thank you to the team behind the conference in particular ❤️. DjangoCon Europe continues to show the strength and warmth of our community, proving that the best part of Django is truly the people. See you at the next one!
PS: if you’re in Europe and like organizing big events, do reach out to talk about organizing a DjangoCon Europe in your locale in the coming years.
May 14, 2025
Hugo van Kemenade
PEPs & Co.
PEPs #
Here’s Barry Warsaw on the origin of PEPs, or Python Enhancement Proposals (edited from PyBay 2017):
I like backronyms. For those who don’t know: a backronym is where you come up with the acronym first and then you come up with the thing that the acronym stands for. And I like funny sounding words, like FLUFL was one of those. When we were working for CNRI, they also ran the IETF conferences. The IETF is the Internet Engineering Task Force, and they’re the ones who come up with the RFCs. If you look at RFC 822, it defines what an email message looks like.
We got to a point, because we were at CNRI we were more intimately involved in the IETF and how they do standards and things, we observed at the time that there were so many interesting ideas coming in being proposed for Python that Guido really just didn’t have time to dive into the details of everything.
So I thought: well, we have this RFC process, let’s try to mirror some of that so that we can capture the essence of an idea in a document that would serve as a point of discussion, and that Guido could let people discuss and then come in and read the summary of the discussion.
And I was just kind of thinking: well, PEPs, that’s kind of peppy, it’s kind of a funny sounding word. I came up with the word and then I backronymed it into Python Enhancement Proposal. And then I wrote PEP 0 and PEP 1. PEP 0 was originally handwritten, and so I was the first PEP author because I came up with the name PEP.
But the really interesting thing is that you see the E.P. part used in a lot of other places, like Debian has DEPs now. There’s a lot of other communities that have these enhancement proposals so it’s kind of interesting. And then the format of the PEP was directly from that idea of the RFC’s standard.
& Co. #
Here’s a collection of enhancement proposals from different communities.
Are there more? Let me know!
Header photo: Grand Grocery Co., Lincoln, Nebraska, USA (1942) by The Library of Congress, with no known copyright restrictions.
Real Python
How to Get the Most Out of PyCon US
Congratulations! You’re going to PyCon US!
Whether this is your first time or you’re a regular attendee, going to a conference full of people who love the same thing as you is always a fun experience. There’s so much more to PyCon than just a bunch of people talking about the Python language—it’s a vibrant community event filled with talks, workshops, hallway conversations, and social gatherings. But for first-time attendees, it can also feel a little intimidating. This guide will help you navigate all there is to see and do at PyCon.
PyCon US is the biggest conference centered around Python. Originally launched in 2003, this conference has grown exponentially and has even spawned several other PyCons and workshops around the world.
Everyone who attends PyCon will have a different experience, and that’s what makes the conference truly unique. This guide is meant to help you, but you don’t need to follow it strictly.
By the end of this article, you’ll know:
- How PyCon consists of tutorials, conference, and sprints
- What to do before you go
- What to do during PyCon
- What to do after the event
- How to have a great PyCon
This guide contains links that are specific to PyCon 2025, but it should be useful for future PyCons as well.
Free Download: Get a sample chapter from Python Tricks: The Book that shows you Python’s best practices with simple examples you can apply instantly to write more beautiful + Pythonic code.
What PyCon Involves
Before considering how to get the most out of PyCon, it’s first important to understand what PyCon involves.
PyCon is divided into three stages:
- Tutorials: PyCon starts with two days of three-hour workshops, during which you learn in depth with instructors. These sessions are worth attending because the class sizes are small, and you’ll have the chance to ask instructors questions directly. You should consider going to at least one of these if you can. They have an additional cost of $150 per tutorial.
- Conference: Next, PyCon offers three days of talks. Each presentation runs for 30 to 45 minutes, and around five talks run concurrently, including a Spanish-language charlas track. But that’s not all: there are open spaces, sponsors, posters, lightning talks, dinners, and so much more.
- Sprints: During this stage, you can take what you’ve learned and apply it! This is a four-day exercise where people group up to work on various open-source projects related to Python. If you’ve got the time, going to one or more sprint days is a great way to practice what you’ve learned, become associated with an open-source project, and network with other smart and talented people. If you’re still unconvinced, here’s what to expect at this year’s PyCon US sprints. Learn more about sprints from an earlier year in this blog post.
Since most PyCon attendees go to the conference part, that’ll be the focus of this article. However, don’t let that deter you from attending the tutorials or sprints if you can!
You may learn more technical skills by attending the tutorials rather than listening to the talks. The sprints are great for networking and applying the skills you already have, as well as learning new ones from the people you’ll be working with.
What to Do Before You Go
In general, the more prepared you are for something, the better your experience will be. The same applies to PyCon.
It’s really helpful to plan and prepare ahead of time, which you’re already doing just by reading this article!
Look through the talks schedule and see which talks sound most interesting. This doesn’t mean you need to plan out all of the talks you’ll see in every slot possible. But it helps to get an idea of which topics will be presented so that you can decide what you’re most interested in.
Getting the PyCon US mobile app will help you plan your schedule. This app lets you view the schedule for the talks and add reminders for those you want to attend. If you’re having a hard time picking which talks to attend, you can come prepared with a question or problem you need to solve. Doing this can help you focus on the topics that are important to you.
If you can, come a day early to check in and attend the opening reception. The line to check in on the first day is always long, so you’ll save time if you check in the day before. There’s also an opening reception that evening, where you can meet other attendees and speakers and check out the various sponsors and their booths.
If you’re new to PyCon, the Newcomer Orientation can help you learn about the conference and how you can participate.
Read the full article at https://realpython.com/pycon-guide/ »
Django Weblog
DSF member of the month - Simon Charette
For May 2025, we welcome Simon Charette as our DSF member of the month! ⭐
Simon Charette is a longtime Django contributor and community member. He served on the Django 5.x Steering Council and is part of the Security team and the Triage and Review team. He has been a DSF member since November 2014.
You can learn more about Simon by visiting Simon's GitHub Profile.
Let’s spend some time getting to know Simon better!
Can you tell us a little about yourself (hobbies, education, etc)
My name is Simon Charette and I'm based in Montréal. I've been contributing to Django for over a decade mainly to the ORM and I have a background in software engineering and mathematics. I work as a principal backend engineer at Zapier where we use Python and Django to power many of our backend services. Outside of Django and work I like to spend time cycling around the world, traveling with my partner, and playing ultimate frisbee.
Out of curiosity, your GitHub profile picture appears to be a Frisbee, is it correct? If so, have you been playing for a long time?
I've been playing ultimate frisbee since college which is around the time I started contributing to Django. It has been a huge part of my life since then as I made many friends and met my partner playing through the years. My commitment to ultimate frisbee can be reflected in my volume of contributions over the past decade as it requires more of my time during certain periods of the year. It also explains why I wasn't able to attend most DjangoCon in spring and fall as this is usually a pretty busy time for me. I took part in the world championships twice and I played in the UFA for about 5 years before retiring three years ago. Nowadays I still play but at a lower intensity level and I am focused on giving back to the community through coaching.
How did you start using Django?
Back in college I was working part-time for a web agency that had an in-house PHP framework and was trying to determine which tech stack and framework they should migrate to in order to ease onboarding of their developers and reduce their maintenance costs. I was tasked, with another member of the team, to identify potential candidates, and despite my lack of familiarity with Python at the time we ended up choosing Django over PHP's Symfony mainly because of its spectacular documentation and third-party app ecosystem.
What other framework do you know and if there is anything you would like to have in Django if you had magical powers?
If I had magical powers I'd invent Python ergonomics to elegantly address the function coloring problem so it's easier for Django to be adapted to an async-ready world. I'm hopeful that the recent development on GIL removal in Python 3.13+ will result in renewed interest in the usage of threading, which Django is well equipped to take advantage of, over the systematic usage of an event loop to deal with web serving workloads, as the async world comes with a lot of often overlooked drawbacks.
What projects are you working on now?
I have a few Django-related projects I'm working on, mainly relating to ORM improvements (deprecating extra, better usage of RETURNING when available), but the main one has been a tool to keep track of the SQL generated by the Django test suite over time to more easily identify unintended changes that still pass the test suite. My goal with this project is to have a CI-invokable command that would run the full Django test suite and provide a set of tests that generated different SQL compared to the target branch, so it's much easier to identify unintended side effects when making invasive changes to the ORM.
Which Django libraries are your favorite (core or 3rd party)?
- DRF
- django-filter
- django-seal (shameless plug)
What are the top three things in Django that you like?
- The people
- The ORM, unsurprisingly
- The many entry points the framework provides to allow very powerful third-party apps to be used together
You've contributed significantly to improving the Django ORM. What do you believe is the next big challenge for Django ORM, and how do you envision it evolving in the coming years?
The ORM's expression interface is already very powerful but there are effectively some remaining rough edges. I believe that adding generalized support for composite virtual fields (a field composed of other fields) could solve many problems we currently face with how relationships are expressed between models as we currently lack a way to describe an expression that can return tuples of values internally. If we had this building block, adding a way to express and compose table expressions (CTE, subquery pushdown, aggregation through subqueries) would be much easier to implement without denaturing the ORM by turning it into a low level query builder. Many of these things are possible today (e.g. django-cte) but they require a lot of SQL compilation and ORM knowledge and can hardly be composed together.
How did you start to contribute to the ORM? What would be the advice you have for someone interested to contribute to this field?
I started small by fixing a few issues that I cared about and by taking the time to read through Trac, mailing lists, and git-blame for changes in the area that were breaking tests as I attempted to make changes. One thing that greatly helps in onboarding on the ORM is to at least have some good SQL fundamentals. When I first started I had already written an MSSQL ORM in PHP, which helped me at least understand the idea behind the generation of SQL from a higher-level abstraction. Nowadays there are tons of resources out there to help you get started on understanding how things are organized, but I would suggest this particular video where I attempt to walk through the different phases of SQL generation.
Is there anything else you’d like to say?
It has been a pleasure to be able to be part of this community for so long and I'd like to personally thank Claude Paroz for initially getting me interested in contributing seriously to the project.
Thank you for doing the interview, Simon!
eGenix.com
eGenix Antispam Bot for Telegram 0.7.1 GA
Introduction
eGenix has long been running a local user group meeting in Düsseldorf called Python Meeting Düsseldorf and we are using a Telegram group for most of our communication.
In the early days, the group worked well and we only had few spammers joining it, which we could well handle manually.
More recently, this has changed dramatically. We are seeing between 2-5 spam signups per day, often at night. Furthermore, the accounts signing up are not always easy to spot as spammers, since they often come with profile images, descriptions, etc.
With the bot, we now have a more flexible way of dealing with the problem.
Please see our project page for details and download links.
Features
- Low impact mode of operation: the bot tries to keep noise in the group to a minimum
- Several challenge mechanisms to choose from, more can be added as needed
- Flexible and easy to use configuration
- Only needs a few MB of RAM, so can easily be put into a container or run on a Raspberry Pi
- Can handle quite a bit of load due to the async implementation
- Works with Python 3.9+
- MIT open source licensed
News
The 0.7.1 release fixes a few bugs and adds more features:
- Added missing dependency on emoji package to setup (bug introduced in 0.7.0, fixed in 0.7.1)
- Added user name check for number of emojis, since these are being used a lot by spammers
- Added wheel as requirement, since this is no longer included per default
- Updated copyright year
It has been battle-tested in production for several years already and is proving to be a really useful tool to help with Telegram group administration.
More Information
For more information on the eGenix.com Python products, licensing and download instructions, please write to sales@egenix.com.
Enjoy !
Marc-Andre Lemburg, eGenix.com
May 13, 2025
PyCoder’s Weekly
Issue #681: Loguru, GeoDjango, flexicache, and More (May 13, 2025)
#681 – MAY 13, 2025
View in Browser »
How to Use Loguru for Simpler Python Logging
In this tutorial, you’ll learn how to use Loguru to quickly implement better logging in your Python applications. You’ll spend less time wrestling with logging configuration and more time using logs effectively to debug issues.
REAL PYTHON
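For a quick taste of what the tutorial covers, here's a minimal sketch using Loguru's documented API (the sink file name and messages are made up):
from loguru import logger

# Loguru logs out of the box, with no handler or formatter setup
logger.info("Application started")

# A single call adds a rotating file sink with its own level
logger.add("app.log", rotation="1 MB", level="WARNING")
logger.warning("This message also lands in app.log")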
Maps With Django: GeoDjango, Pillow & GPS
A quick-start guide to create a web map with images, using the Python-based Django web framework, leveraging its GeoDjango module, and Pillow, the Python imaging library, to extract GPS information from images.
PAOLO MELCHIORRE
From try/except to Production Monitoring: Learn Python Error Handling the Right Way
This guide starts with the basics—errors vs. exceptions, how try/except works—and builds up to real-world advice on monitoring and debugging Python apps in production with Sentry. It’s everything you need to go from “I think it broke?” to “ai autofixed my python bug before it hit my users.” →
SENTRY sponsor
Exploring flexicache
flexicache is a cache decorator that comes with the fastcore library. This post describes how its arguments give you finer control over your caching.
DANIEL ROY GREENFELD
Python Jobs
Senior Software Engineer – Quant Investment Platform (LA or Dallas) (Los Angeles, CA, USA)
Causeway Capital Management LLC
Articles & Tutorials
Gen AI, Knowledge Graphs, Workflows, and Python
Are you looking for some projects where you can practice your Python skills? Would you like to experiment with building a generative AI app or an automated knowledge graph sentiment analysis tool? This week on the show, we speak with Raymond Camden about his journey into Python, his work in developer relations, and the Python projects featured on his blog.
REAL PYTHON podcast
Sets in Python
In this tutorial, you’ll learn how to work effectively with Python’s set data type. You’ll learn how to define set objects and discover the operations that they support. By the end of the tutorial, you’ll have a good feel for when a set is an appropriate choice in your programs.
REAL PYTHON
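As a one-minute preview of those operations (a sketch, not taken from the tutorial):
# Literal syntax and the most common set operations
a = {1, 2, 3}
b = {3, 4, 5}

print(a | b)   # union: {1, 2, 3, 4, 5}
print(a & b)   # intersection: {3}
print(a - b)   # difference: {1, 2}
print(2 in a)  # fast membership test: True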
The Magic of Software
This article, subtitled “what makes a good engineer also makes a good engineering organization” is all about how we chase the latest development trends by the big corps, even when they have little bearing on your org’s success.
MOXIE MARLINSPIKE
Using the Python subprocess Module
In this video course, you’ll learn how to use Python’s subprocess module to run and control external programs from your scripts. You’ll start with launching basic processes and progress to interacting with them as they execute.
REAL PYTHON course
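A minimal sketch of the kind of call the course starts from (the command here is arbitrary):
import subprocess
import sys

# Run an external program, capture its output as text, raise if it fails
result = subprocess.run(
    [sys.executable, '--version'],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())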
Q&A With the PyCon US 2025 Keynote Speakers
Want to learn more about the PyCon US keynote speakers? This interview asked each of them the same five questions, ranging from how they got into Python to their favorite open source project people don’t know enough about.
LOREN CRARY
Making PyPI’s Test Suite 81% Faster
Trail of Bits is a security research company that sometimes works with the folks at PyPI. Their most recent work reduced test execution time from 163 seconds down to 30. This post describes how they accomplished that.
ALEXIS CHALLANDE
pre-commit: Install With uv
pre-commit is Adam’s favourite Git-integrated “run things on commit” tool. It acts as a kind of package manager, installing tools as necessary from their Git repositories. This post explains how to use it with uv.
ADAM JOHNSON
5 Weirdly Useful Python Libraries
This post describes five different Python libraries that you’ve probably never heard of, but very well may love using. Topics include generating fake data and making your computer talk.
DEV
Developer Trends in 2025
Talk Python interviews Gina Häußge, Ines Montani, Richard Campbell, and Calvin Hendryx-Parker and they talk about the recent Stack Overflow Developer survey results.
KENNEDY ET AL podcast
The Future of Textualize
Will McGugan, founder of Textualize the company, has announced that it will be closing its doors. Textualize the open-source project will remain.
WILL MCGUGAN
Asyncio Demystified: Rebuilding It One Yield at a Time
Get a better understanding of how asyncio works in Python, by building a lightweight version from scratch using generators and coroutines.
JEAN-BAPTISTE ROCHER • Shared by Jean-Baptiste Rocher
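As a rough illustration of the generator-based approach the post describes (this sketch is mine, not the author's code):
from collections import deque

def task(name, steps):
    # A "coroutine" as a plain generator: each yield hands control back
    for i in range(steps):
        print(f"{name}: step {i}")
        yield

def run(tasks):
    # A tiny round-robin scheduler, standing in for asyncio's event loop
    queue = deque(tasks)
    while queue:
        current = queue.popleft()
        try:
            next(current)          # resume the task until its next yield
            queue.append(current)  # not finished: put it back in line
        except StopIteration:
            pass                   # the generator returned: task is done

run([task("a", 2), task("b", 3)])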
Projects & Code
Build Python GUI’s Using Drag and Drop
GITHUB.COM/PAULLEDEMON • Shared by Paul
Events
PyCon US 2025
May 14 to May 23, 2025
PYCON.ORG
PyData Bristol Meetup
May 15, 2025
MEETUP.COM
PyLadies Dublin
May 15, 2025
PYLADIES.COM
PyGrunn 2025
May 16 to May 17, 2025
PYGRUNN.ORG
Flask Con 2025
May 16 to May 17, 2025
FLASKCON.COM
Happy Pythoning!
This was PyCoder’s Weekly Issue #681.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
PyCharm
We’re excited to launch the second edition of our User Experience Survey for DataGrip and the Database Tools & SQL Plugin!
Your feedback from the previous survey helped us better understand your needs and prioritize the features and improvements that matter most to you.
Thanks to your input, we’ve already delivered a first set of enhancements focused on improving your experience:
- Faster introspection for MySQL and MariaDB (with more DBMS support coming soon!)
- New Quick Start Guide with sample database
- Non-modal Create and Modify dialogs
- AI-powered Fix and Explain SQL errors
- Better database context integration within the AI Assistant
- And much more: What’s New in DataGrip
Now, we’d love to hear from you again! Have these improvements made a difference for you? What should we focus on next to better meet your needs?
The survey takes approximately 10 minutes to complete.
As a thank you, everyone who provides meaningful feedback will be entered to win:
- A $100 Amazon Gift Card
- A 1-year JetBrains All Products Pack (individual license)
Thank you for helping us build the best database tools!
DataGrip and Database Tools UX Survey #2
Real Python
Working With Missing Data in Polars
Efficiently handling missing data in Polars is essential for keeping your datasets clean during analysis. Polars provides powerful tools to identify, replace, and remove null values, ensuring seamless data processing.
This video course covers practical techniques for managing missing data and highlights Polars’ capabilities to enhance your data analysis workflow. By following along, you’ll gain hands-on experience with these techniques and learn how to ensure your datasets are accurate and reliable.
By the end of this video course, you’ll understand that:
- Polars allows you to handle missing data using LazyFrames and DataFrames.
- You can check for null values in Polars using the .null_count() method.
- NaN represents non-numeric values, while null indicates missing data.
- You can replace NaN in Polars by converting them to nulls and using .fill_null().
- You can fix missing data by identifying, replacing, or removing null values (a short sketch follows below).
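Not from the course itself, but a minimal sketch of those operations using Polars' documented API (the column names and fill defaults are made up):
import polars as pl

df = pl.DataFrame({'x': [1.0, None, float('nan')], 'y': ['a', 'b', None]})

# Count nulls per column; note that NaN is a float value, not a null
print(df.null_count())

# Convert NaN to null, then fill the nulls with per-column defaults
df = df.with_columns(
    pl.col('x').fill_nan(None).fill_null(0.0),
    pl.col('y').fill_null('missing'),
)
print(df)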
Daniel Roy Greenfeld
Exploring flexicache
An exploration of using flexicache for caching in Python.
Real Python
Quiz: Getting Started With Python IDLE
In this quiz, you’ll test your understanding of Python IDLE.
Python IDLE is an IDE included with Python installations, designed for basic editing, execution, and debugging of Python code. You can also customize IDLE to make it a useful tool for writing Python.
Luke Plant
Knowledge creates technical debt
The term technical debt, now used widely in software circles, was coined to explain a deliberate process where you write software quickly to gain knowledge, and then use the knowledge you’ve gained to improve your software.
This perspective is still helpful today when people speak of technical debt as only a negative, or only as a result of bad decisions. Martin Fowler’s Tech Debt Quadrant is a useful antidote to that.
A consequence of this perspective is that technical debt can appear at any time, apparently from nowhere, if you are unfortunate enough to gain some knowledge.
If you discover a better way to do things, the old way of doing it that is embedded in your code base is now “debt”:
- you can either live with the debt, “paying interest” in the form of all the ways that it makes your code harder to work with;
- or you can “pay down” the debt by fixing all the code in light of your new knowledge, which takes up-front resources that could have been spent on something else, but hopefully will make sense in the long term.
This “better way” might be a different language, library, tool or pattern. In some cases, the better way has only recently been invented. It might be your own personal discovery, or something industry wide. It might be knowledge gained through the actual work of doing the current project (which was Ward Cunningham’s usage of the term), or from somewhere else. But the end result is the same – you know more than you did, and now you have a debt.
The problem is that this doesn’t sound like a good thing. You learn something, and now you have a problem you didn’t have before, and it’s difficult to put a good spin on “I discovered a debt”.
But from another angle, maybe this perspective gives us different language to use when communicating with others and explaining why we need to address technical debt. Rather than say “we have a liability”, the knowledge we have gained can be framed as an opportunity. Failure to take the opportunity is an opportunity cost.
The “pile of technical debt” is essentially a pile of knowledge – everything we now think is bad about the code represents what we’ve learned about how to do software better. The gap between what it is and what it should be is the gap between what we used to know and what we now know.
And fixing that code is not “a debt we have to pay off”, but an investment opportunity that will reap rewards. You can refuse to take that opportunity if you want, but it’s a tragic waste of your hard-earned knowledge – a waste of the investment you previously made in learning – and eventually you’ll be losing money, and losing out to competitors who will be making the most of their knowledge.
Finally, I think phrasing it in terms of knowledge can help tame some of our more rash instincts to call everything we don’t like “tech debt”. Can I really say “we now know” that the existing code is inferior? Is it true that fixing the code is “investing my knowledge”? If it’s just a hunch, or a personal preference, or the latest fashion, maybe I can both resist the urge for unnecessary rewrites, and feel happier about it at the same time.
Talk Python to Me
#505: t-strings in Python (PEP 750)
Python has many string formatting styles which have been added to the language over the years. Early Python used the % operator to inject formatted values into strings. And we have string.format() which offers several powerful styles. Both were verbose and indirect, so f-strings were added in Python 3.6. But these f-strings lacked security features (think Little Bobby Tables) and they manifested as fully-formed strings to runtime code. Today we talk about the next evolution of Python string formatting for advanced use cases (SQL, HTML, DSLs, etc.): t-strings. We have Paul Everitt, David Peck, and Jim Baker on the show to introduce this upcoming new language feature. (A short sketch of these styles follows the links below.)
Episode sponsors: Posit, Auth0, Talk Python Courses
Links from the show:
- Paul on X: @paulweveritt
- Paul on Mastodon: @pauleveritt@fosstodon.org
- Dave Peck on GitHub: github.com/davepeck
- Jim Baker: github.com/jimbaker
- PEP 750 – Template Strings: peps.python.org
- tdom, placeholder for a future library on PyPI using PEP 750 t-strings: github.com/t-strings/tdom
- PEP 750: Tag Strings For Writing Domain-Specific Languages: discuss.python.org
- How To Teach This: peps.python.org
- PEP 501 – General purpose template literal strings: peps.python.org
- Python’s new t-strings: davepeck.org
- PyFormat: Using % and .format() for great good!: pyformat.info
- flynt, a tool to automatically convert old string literal formatting to f-strings: github.com/ikamensh/flynt
- Examples of using t-strings as defined in PEP 750: github.com/davepeck/pep750-examples
- htm.py issue: github.com/jviide/htm.py
- Exploits of a Mom: xkcd.com/327
- pyparsing: github.com/pyparsing/pyparsing
- Watch this episode on YouTube: youtube.com
- Episode transcripts: talkpython.fm
Stay in touch with us:
- Subscribe to Talk Python on YouTube: youtube.com
- Talk Python on Bluesky: @talkpython.fm
- Talk Python on Mastodon: @talkpython
- Michael on Bluesky: @mkennedy.codes
- Michael on Mastodon: @mkennedy
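To make the progression concrete, here's a minimal sketch of the three existing styles the episode mentions; the final, commented-out line shows the t-string syntax as described in PEP 750 (not taken from the episode), since it requires an interpreter with PEP 750 support:
name = "Ada"

# The styles mentioned above, oldest first
print("Hello, %s" % name)          # %-formatting
print("Hello, {}".format(name))    # str.format()
print(f"Hello, {name}")            # f-strings, Python 3.6+

# PEP 750 t-strings: a t-string evaluates to a template object that keeps
# the literal parts and the interpolated values separate, so a library can
# escape the values before rendering them into SQL or HTML.
# template = t"Hello, {name}"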