German Perl/Raku Workshop 2026 (Berlin): a report
Last week, the 28th edition of the German Perl/Raku Workshop took place in Berlin. It was great to see some familiar faces (as well as some new ones!) to discuss computing, our favourite Swiss army chainsaw, and much more.
This report contains rough notes of the talks that I attended.

Getting to Berlin
Because I’m a bit crazy, and because I wanted to collect up-to-date and relevant data before my talk, I rode my bike from Hannover to Berlin. The trip was roughly 300 km long and took two days. The weather was pretty good, and I only had a nasty headwind for part of the first day. The second day went quite quickly, and once in Berlin, I decided to be a bit of a tourist for a little while and rode to the Brandenburg Gate.

Since there were several demonstrations in Berlin on that day, the Straße des 17. Juni (the road leading from the victory column to the Brandenburg Gate) was completely closed to traffic. This meant that people were walking and riding their bikes along what is usually three lanes of traffic in each direction. That was an opportunity I was glad to take! Like, when am I going to get that kind of opportunity again? Needless to say, I thoroughly enjoyed the freedom and space of riding along such a famous street.
The next day (Sunday), I happened to meet Geoffrey and Theo as they were on their way to breakfast. I didn’t expect to stumble across anyone from the workshop so soon, and it was nice to have a chat and hear how they were and what they were up to. They went off to do touristy stuff, but I decided that, after two days on the bike, spending some time at a cafe with my laptop was the more relaxing option.
Pre-workshop meetup
The pre-workshop event took place in a Bavarian-style pub a few blocks’ walk from the workshop venue. I found it amusing that we were in a Bavarian pub when we were in Berlin. However, there’s probably no such thing as a Berlin-style pub; it’s probably just a pub. The workshop took place in Munich last year, so in some sense the pre-workshop meetup venue made a connection to last year’s workshop. Given that we were the last people to leave (the staff had to be rather insistent to get us to pay and leave; they were very nice about it, though), I’d say that the conversation was at its usual excellent level.
Day 1
Last year’s workshop had a very relaxed schedule, and this year was no different. I far prefer this style to the stress of heaps of talks jammed in between 8 am and 6 pm. It’s much more laid-back and gives more time for discussions. One also has more time to process the information presented in the talks. Many thanks to the organisers for putting this together so well!
Max Maischein (Corion) - Using Coding Assistants with Perl
Although Max doesn’t use much Perl or AI in his job, he’d looked into coding assistants anyway, basically because they’re the current hype and he wanted to see how and where they can be useful. A coding assistant is built from a coding harness and a large language model; the harness provides the command execution and loop environment around the model. Currently, common harnesses include Claude, OpenCode and Codex, whereas the models that one can use with these harnesses include Claude AI (which currently costs ~20€/month), z.ai (72€/year) or Deepseek (which is self-hosted).
There are yet others, such as OpenClaw, which act as both a harness and a model. The big deal here is that OpenClaw also has access to your email, bank account, messenger services and identity, and is thus supposed to be able to help people organise their lives. However, the security and financial implications of letting a statistical inference engine speak for you and make transactions on your behalf are, shall we say, fraught with difficulties. Max mentioned the lethal trifecta: access to private data, the ability to communicate externally, and exposure to untrusted content. It’s not a good idea to give automated services so much access, as things can go wrong in very bad ways. To avoid potential issues, Max gave some advice:
- “Don’t YOLO outside of a container”
- When giving an agent commit access to a repository, let it work on a copy of the repo, not the main repository itself
- Don’t give the agent any credentials
- Run agents within a VM/container
Although there are issues with using agents wantonly, they can still be useful. One can use them to generate code, run that code, check the output (e.g. via tests), and work from lists. The quality depends upon the size of the task, the quality of the surrounding code and the quality of the prompt given to the agent.
Max found that LLMs know Perl rather well, although the style is from around Perl 5.12. They know about the need for use strict; use warnings; (something it took years to bang into the heads of humans) and can cope with function signatures, possibly because these are common in other languages. The coding agents mimic pre-existing code well. If one has good prompts (examples exist on, e.g., GitHub) that provide a walkthrough of good software development, then the agent is reminiscent of an enthusiastic junior developer.
He then discussed some example projects for which he’d used coding agents and the pluses and minuses he’d discovered. Sometimes things were hard-coded when they shouldn’t have been. There was extra code added for testing, but it wasn’t used in the actual code. He found that he needed to provide the agent with good API examples, good implementation ideas, and general program structure to get acceptable output. It’s possible to give agents templates of the desired coding style so that the output matches the way one wants code to look.
Max also discussed some of the social, economic and other consequences of using coding agents.
From a social perspective, the threshold for implementing a tool is massively reduced; it’s now much easier to implement an idea. Due to the low quality of the code output, coding agents are ok for throwaway debugging tools, but not so much for senior-dev-level production code. There is also the issue of money flows and their concentration in a few select companies. The point was also made about “pulling the ladder up”, meaning that junior developers aren’t really involved in programming anymore because so much is handled by the coding agent. The level of structured thought required for programming is not really there; one describes feature lists instead. Also, one doesn’t get the opportunity to make errors and learn from them in the same way. Debugging code becomes more difficult since it’s easier to debug code one has written oneself, and the skill itself atrophies somewhat because junior devs are less likely to want to do this.
One long-term economic consequence was based on the subscription model that most hosters use. They want the service to be useful enough to you that you spend lots of money on tokens, but not so much that you want to cancel your subscription. From what I understand, many of the companies aren’t even covering their costs from the subscription income, so it’s hard to see how this model will continue to work long-term.
The talk was an informative and thought-provoking introduction to the workshop’s overall theme: Agentic Perl.
Abigail - Sharding a database, twice
Abigail has recently retired after having spent most of his working life at Booking.com. One of his experiences there was sharding the enormous database behind Booking’s site to solve performance and size problems.
The talk wasn’t about how to shard a database (that has been described elsewhere); rather, it gave the background as to why one would want to shard a database in the first place and covered some of the technical challenges in doing so. A “shard” is a horizontal partition of data in a database. By sharding a database, one can reduce the disk, memory and network requirements for access to the database, especially if the database is exceedingly large.
The database at Booking.com stores hotel inventory and contains ~1000 tables spanning 14 years of data. There are roughly 10^11 rows, with 10^10 to 10^11 row reads per day and 10^9 to 10^10 row updates per day. The processing was split across many read-only slave instances, yet there was only a single master; by 2013, the disks were filling up and there was a lot of replication delay. Any downtime costs millions of Euros in lost income.
After much discussion, the Booking people decided to shard the database, with the following requirements: no downtime; no big switch to the new system; the process had to be reversible; all data for a single hotel had to be in the same shard; one shard could not influence the others; resharding had to be easy; and code changes had to be minimal.
In the end, they decided to split the database into four parts. This required updates to over 100 Perl files. They included fallbacks so that code still using the legacy interfaces would keep working while the new, sharding-aware code also did its job properly. They moved the data in batches, initially starting small and proceeding to larger chunks of data as they gained confidence that everything was working properly. For instance, they moved a single hotel at a time, starting with test hotels, then new hotels, then small countries and large countries, and later ran everything in parallel. Once everything was moved, they removed the legacy DB handling code. The whole process from the initial meeting to all data having been moved took about 6 months.
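To make the “all data for a single hotel lives in one shard” idea concrete, here is a toy routing sketch in Perl. This is purely illustrative and assumes a simple modulo scheme; the talk didn’t describe Booking’s actual routing logic, and the hostnames are made up.

    # Toy example only: route all rows for a hotel to one of four shards,
    # so that a single hotel's data always lives together on one shard.
    use strict;
    use warnings;

    use constant NUM_SHARDS => 4;

    # deterministic mapping: the same hotel always ends up on the same shard
    sub shard_for_hotel {
        my ($hotel_id) = @_;
        return $hotel_id % NUM_SHARDS;
    }

    # hypothetical DSNs; real shard hosts would come from configuration
    my %dsn_for_shard = (
        0 => 'dbi:mysql:database=hotels;host=shard0.example.com',
        1 => 'dbi:mysql:database=hotels;host=shard1.example.com',
        2 => 'dbi:mysql:database=hotels;host=shard2.example.com',
        3 => 'dbi:mysql:database=hotels;host=shard3.example.com',
    );

    my $hotel_id = 123456;
    my $dsn      = $dsn_for_shard{ shard_for_hotel($hotel_id) };
    print "hotel $hotel_id is served from $dsn\n";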
In 2024, the database was hitting its limits again: disk space was running out, and there were deadlocks, wait timeouts, and various I/O limits being hit. They decided to double the number of shards and used a similar process to the one they used back in 2013. This time, there was a bug in the code which caused duplicate data to appear in queries, meaning the site showed twice the number of rooms available in hotels. Oops! Fortunately, the bug was only in one location, and they were able to fix the problem quickly; the fix took only 30 seconds in the end. Afterwards, they could clean up the duplicated data. This required every row in the database to be inspected twice, and then the tables had to be rebuilt to reclaim space. The cleanup took roughly 2 months to complete.
In the end, everything worked out, and the site is still chugging along well. I was left wondering if it will be another ten years before Booking needs to shard its databases again…
Lars Dɪᴇᴄᴋᴏᴡ (daxim) - Der Datentyp und die Datenbank
It turns out that it’s rather difficult to handle and store complex data types in an SQL database. It is, however, possible to get close, but it’s a lot of work. This talk explained some of the details of how to achieve that.
One can combine the types available in SQL with Boolean AND and OR compositions (e.g. via the UNION keyword) to create more complex types. It would be a lot easier to do such things if database creators added native support to their databases, but it seems that this won’t happen, as SQL isn’t intended as a general-purpose programming language (something which would have the ability to create complex types from more basic types).
One example presented of representing a complex data type in a database was that of a payment type. This could be, e.g. cash, online or via credit card. The overall type is a payment, but then one can have more specific types and store these sensibly within the database. Lars showed how to do this, but it wasn’t simple, and one had to be careful about how one ordered the construction of the types and tables in SQL so that everything worked.
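As a rough illustration of the payment example (this is my own minimal sketch, not the schema Lars presented), one common way to store such a sum type is a parent table with a discriminator column plus one table per variant:

    #!/usr/bin/env perl
    # Minimal sketch: model "payment = cash | online | credit card" in SQL,
    # driven from Perl via DBI and an in-memory SQLite database.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
        { RaiseError => 1 } );

    # The parent table carries the discriminator; each variant gets its own
    # table holding the variant-specific columns.
    $dbh->do($_) for (
        q{CREATE TABLE payment (
              id     INTEGER PRIMARY KEY,
              method TEXT NOT NULL
                     CHECK (method IN ('cash', 'online', 'credit_card'))
          )},
        q{CREATE TABLE payment_online (
              payment_id INTEGER PRIMARY KEY REFERENCES payment(id),
              provider   TEXT NOT NULL
          )},
        q{CREATE TABLE payment_credit_card (
              payment_id INTEGER PRIMARY KEY REFERENCES payment(id),
              card_last4 TEXT NOT NULL
          )},
    );

    # Insert an "online" payment: the parent row first, then the variant row.
    $dbh->do(q{INSERT INTO payment (method) VALUES ('online')});
    my $id = $dbh->last_insert_id( undef, undef, 'payment', 'id' );
    $dbh->do( q{INSERT INTO payment_online (payment_id, provider) VALUES (?, ?)},
        undef, $id, 'ExamplePay' );

Enforcing that exactly one variant row exists per payment is where the careful ordering and extra constraints that Lars mentioned come into play.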
Flavio S. Glock - PerlOnJava: A Perl Distribution for the JVM Part 1
This was one of the most impressive talks at the workshop. The sheer amount of work that has gone into this, spread over many projects and many years, was mind-blowing. Put very simply, Flavio has managed to get the Perl compiler and runtime working within the JVM. This is not an interpreter wrapping a Perl binary; this is Perl code compiled to native JVM bytecode, hence it can be used alongside Java, Kotlin or Scala code. His system requires Java 21+ and targets Perl 5.42+. This first talk was a high-level overview of the PerlOnJava project and some of its results.
Using the JVM makes sense in many ways because the JVM has 30 years of optimisations, meaning that its JIT ability is now very good. Also, there are 500k+ libraries that one now has access to. The JVM is also container-aware and has support for Docker/Kubernetes. And Perl can run alongside other languages which already run on the JVM, of which there are now many.
He’s managed to get Perl scripts to run unchanged on the JVM. It’s also
possible to embed Perl in Java applications. The PerlOnJava application is
a single jar file and has no external dependencies. It’s possible to run
programs via java -jar ... or via a wrapper ./jperl script.pl.
The PerlOnJava project builds on many earlier attempts to get Perl to run on Java and is most directly descended from the Perlito project. There are 200,000 tests in the test suite and 400 Java files. Everything is JIT-optimised.
It’s possible to run pure Perl CPAN modules as-is, and there are XS modules for which Java equivalents can be used. The standard Perl test suites also work as-is in this environment. Because of the large amount of optimisation in the JVM, some code runs 5x faster than in a pure Perl environment. However, for string operations, Perl is 2x faster than PerlOnJava. Flavio was able to compile and run Image::ExifTool, which is a very large distribution. It is so large that it exceeded the JVM’s 64KB method size limit, yet he managed to get it to work.
Flavio S. Glock - PerlOnJava: A Perl Distribution for the JVM Part 2
The second part of this talk was the technical deep dive into PerlOnJava.
Unicode was a problem because Java handles this differently from how Perl does it, but this now works as expected.
Flavio needed to handle Perl’s special blocks, such as BEGIN, END,
INIT, and CHECK, mapping them to equivalents in the JVM world.
It’s sometimes possible for large Perl subs to exceed the JVM’s 64KB method
size limit. The way to solve this was to have PerlOnJava fall back to an
internal VM for oversized methods. The second backend was also necessary
due to CPU cache pressure: sparse JVM bytecode overflows the instruction
caches, hence one needs to fall back to another backend. Also, things like
eval STRING need to be handled in the internal VM because using eval
like this doesn’t meet the JVM’s security measures, and if one had left it
as-is, then the JVM would be running the same code each time, making things
much slower. The second backend helps with getting this kind of code to run
with better performance.
Flavio also handled scalars, arrays, hashes, subs/methods, and globs specially so that they translate nicely to their JVM equivalents. He described LOTS of detail about how to get Perl to work within the JVM, and there was a lot of stuff to cover to get Perl code to run as-is within the JVM. One comment he made summed this up nicely: “Perl was never designed for the JVM - but careful engineering makes it work”.
There were some limitations, because not everything maps 100% from Perl to
the JVM: fork isn’t available on the JVM, and there didn’t seem to be a
solution to that just yet. Also, DESTROY can’t work because of the
nondeterministic nature of the JVM’s garbage collection.
Nevertheless, this was impressive work! Wow!
Lars Dɪᴇᴄᴋᴏᴡ (daxim) - Aus dem Nähkästchen
In this talk, Lars showed technical things that help him personally and at work.
For instance, he shares his dotfiles in a repo on GitHub.
There are some tools available that give better diffs in git than the defaults, e.g. delta, difft and the --color-words option to the standard git diff command.
Some tools were replacements for existing and common Unix tools, which also
enhance the standard behaviour. For example: eza is a replacement for ls,
zoxide is a replacement for cd, and xh is a replacement for curl.
He also mentioned mise, which is a version manager for programming
languages and tools. Instead of using perlbrew, rustup, and so on, one
only needs the single tool.
Handy to know!
Thomas Klausner (domm) - Using class
In this talk, domm introduced us to the class keyword that has been
available since Perl 5.38. Perl also has the field and method keywords,
which make it possible to write modern OOP code in core Perl. field is
like has in Moose. The Object::Pad project is being used as a test
bed for new OOP features in core Perl, and slowly, the stable features from
that distribution are making their way into the core language. Paul Evans
has been doing a huge amount of work to get this going.
domm showed how he used the new Perl OOP features to handle conversion from one bibliographic format to another (MAB2 -> MARC21).
MAB2 (Maschinelles Austauschformat für Bibliotheken) dates from 1973; MARC21 originates in the 1960s, was updated in 1999, and is now XML-based.
domm wrote a set of classes to handle conversion from one format to the other.
Some notes about Perl’s new OOP features:
- The :reader attribute automatically creates an accessor for a field.
- The :param attribute allows a field to be set at instantiation.
- The ADJUST block allows code to be run during object construction.
- method uses function signatures directly and automatically provides $self, so my $self = shift; is no longer necessary.
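Putting those pieces together, a minimal (made-up) class using the new syntax might look like the following. Note that the class feature is still experimental and must be enabled explicitly, and that :reader requires Perl 5.40 or later.

    use v5.40;
    use experimental 'class';    # class/field/method are still experimental

    class Record {
        field $id    :param :reader;          # set via new(), accessor generated
        field $title :param :reader = '';     # default value if not passed
        field $tags  :reader        = [];

        ADJUST {
            # runs during construction, after the fields have been initialised
            die "id must be numeric\n" unless $id =~ /\A\d+\z/;
        }

        method add_tag ($tag) {               # $self is provided automatically
            push @$tags, $tag;
            return $self;
        }
    }

    my $rec = Record->new( id => 42, title => 'An example' );
    say $rec->title;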
It was nice to see the new OOP features being used in a real-world project
rather than the standard Point class or something similar. Real-world
code tends to bump up against the sharp edges of reality more, and hence, one
can better see if the new features are applicable in other less-than-pure
situations.
Lee Johnson - I Bought A Scanner (No, Really This Time)
In a previous talk, Lee had discussed a scanner that he’d been thinking of buying, but it turned out to be waaaay too expensive and much too outdated to use. Its quality was second-to-none, though, which is why he’d considered the purchase in the first place. This would have been one way for him to get good-quality scans of photos from his various multi-year photographic projects. This new talk described the scanner he ended up getting.
One thing I didn’t know is that the new security scanners at airports destroy photographic film. I guess there just aren’t that many people going through airports with old-fashioned film anymore. This makes things rather difficult for people who travel with old-style cameras, and hence Lee travels with a film-based camera much less frequently now.
The scanner he’d originally considered, which would allow him to scan large prints in good quality, required proprietary software that was last updated in 2012 and needed a 32-bit architecture to run. He therefore needed an old Mac running a very old version of Mac OS X for things to work at all, which means the system can never be updated, and it’s difficult to know whether the hardware and software will keep working in the future. Also, the connection used FireWire, which was dodgy, breaking down and having connection issues, making things even harder. The scanner itself was EOL, and its physical ports were dying.
One alternative that he considered was to use a high-resolution camera to take digital photos of the film. This did seem a bit odd because one is taking a photo of a photo just so that one can get a digital rendition of the original. I can totally see the point, though, as in essence that’s what a scanner is doing, albeit in much better quality. One issue with using a digital camera is keeping the film flat, because even a small amount of curvature in the film creates a large change in the scan.
The scanner he eventually found and bought was cheaper than the one he’d originally considered and is basically an older model of it. One positive aspect was that it used SCSI, which is more robust than FireWire. Although using a digital camera to scan film can come close to the quality the scanner produced, it wasn’t good enough. The scanner scanned the images well within specification (Lee used a special reference target to check).
The main downside is that the scanner only works with a Mac G4, which won’t last forever. Effectively, the scanner will stop working due to the “upgrade treadmill”, which Lee then discussed in depth. He mentioned many instances of, and issues with, trying to keep old hardware and software running just to use what is, in reality, still good equipment. He noted that the upgrade treadmill seems to be getting faster, or at least steeper; some people at work are effectively employed just to keep updating dependencies for the main software systems. This seems like a crazy situation to be in; however, I’ve experienced it myself at previous jobs. Keeping up with dependency updates, or swapping out dependencies that go EOL, took up a lot of time that could have been spent on actual software development.
Lee has now been able to get high-quality scans of many of the photos from his various projects, so that part of the story had a happy ending.
Day 2
Richard Jelinek (TheWhip) - Perl mit AI
This talk was split into two parts: the first described the trials and tribulations he’d faced in installing and setting up control and monitoring systems in houses for friends and family. The second part presented his implementation of a JIT-ed and parallel Perl.
When constructing the various houses, he wanted to use a Perl-based solution for the control and monitoring system. However, one system,
MisterHouse, was dead, and another, FHEM, was difficult to use. In the
end, he wrote his own and, to a certain degree, created his own hardware for
the monitoring solution. A lot of this was guided by AI to get all of the
various parameters within the desired specifications and to get the code
written.
Richard had wished for a Perl that ran in parallel for a long time and mentioned his wish-list for such functionality. He then implemented a system of his own with the help of AI and showed a demonstration of using his parallel and JIT-ed Perl to render the Mandelbrot set.
Alexander Thurow (Alex) - Thoughts on (Modern?) Software Development - Beobachtungen von einer 21-jährigen Reise
Alex shared lots of advice and anecdotes from many years of programming and of helping teams do software development. He’s now a freelancer focusing on software development, mentoring, communication, and software development culture; the slides from his talk are available at https://onmoderndev.de.
He discussed diverse “soft” topics in software development and the interactions between people when developing software. He noted that continual learning is the key to mastery in every field. It’s difficult to deal with competing requirements in projects, e.g. quality versus time/budget, maintainability versus time-to-market, etc. He mentioned that communication is one of the most difficult things we do as humans. And because code is communication, it’s important to realise that if a piece of code seems completely stupid, that’s a sign that one doesn’t know the pressures under which it was created. In other words, context is very important when reading someone else’s old code.
He also mentioned that in IT, we seem to work in cycles; topics, themes, frameworks, and ideas keep repeating. Thus, if one is aware of the basic patterns, then one can adapt to the latest trends and frameworks.
He also discussed several challenges at the micro and macro-level of software development and their effect, e.g. at a societal level.
He recommended many books, talks, and blog posts, for example:
- Book: “The mythical man-month” by Fred Brooks
- Book: “Pragmatic thinking and learning: refactor your wetware” by Andy Hunt
- Talk: Fantastic biases and where to find them in software development (Michael Kutz and João Proença)
- Blog series about refactoring: https://www.digdeeproots.com/articles/on/naming-process/
Lars Dɪᴇᴄᴋᴏᴡ (daxim) - Hierarchien in SQL
Lars showed how to describe graph-like hierarchies in SQL. It wasn’t 100% clear to me where the impetus to do this came from, but it was impressive that it’s possible.
Sören Laird Sörries - Digitale Souveränität und Made in Europe
These days, we rely more and more on digital services, and many of the big players we depend on most are based in the US. This is rather like having a single point of failure in a software system. Thus, there is a movement to migrate one’s online services to equivalent ones based in the EU. It seems that “100% made in Europe” is achievable for software services; however, it seems practically impossible for hardware. One other reason for such a movement is to reduce the fallout from the enshittification coming from large tech companies.
Sören then discussed various categories of services and mentioned providers that one could use in the EU. Services include: Payments, Cloud, Email, Chat, Office suites, Maps/Navigation, Search, Social media, Music, Video conferencing, and Translation. It turns out that there is a wealth of available services in the EU to choose from that have the same functionality as the large tech companies. The alternatives often seem to be based in Germany, France, the Netherlands or Scandinavia.
Salve J. Nilsen (sjn) - What might a CPAN Steward organization look like?
The EU’s Cyber Resilience Act has far-reaching consequences for manufacturers and their products. Basically, for any product bearing the CE label, the manufacturer is liable for the product’s security, including that of any open source components in the dependency tree. The documentation has to be up to date and correct, and the metadata has to be complete and not misleading. Manufacturers also need to be able to respond to risk assessments and maintain compliance.
Some products can have tens of thousands of dependencies, most of which are open source, and hence manufacturers have to show that they have done due diligence in ensuring that their products, and the software their products depend on, are secure and standards-compliant. This is a huge task, hence Salve has come up with the idea of a community-owned, non-profit steward cooperative. The idea is that projects can come under this umbrella, and manufacturers can pay for a kind of time-limited certificate showing compliance for the Perl-based software they’re using. This takes a lot of the compliance work off the shoulders of manufacturers and could potentially funnel a lot of money into the open source Perl community, which could then be used to give maintainers a living wage and to fund conferences and events such as the Perl Toolchain Summit. The steward is there to support projects and try to sustain them over time.
It’s definitely an interesting idea and would be great if something like that could work. If one considers the amount of money some companies make from software that they can use for free, then funnelling even a small part of their profits back to the open source maintainers would be awesome.
Day 3
Harald Jörg (haj) - Talking to PipeWire
Following on from Harald’s talk from last year’s workshop (Sound from Scratch) in which he generated sound directly from Perl, he wanted to detect things such as when a microphone is plugged in or when a sound was played by an external program. For this task, he used PipeWire, which
provides a low-latency, graph-based processing engine on top of audio and video devices that can be used to support the use cases currently handled by both PulseAudio and JACK.
Of course, he wanted to be able to use PipeWire from Perl, hence he needed to interact with the C-based library somehow. He tried out the API tutorial, then Inline::C, then h2xs, but found that none of these were easy, and he eventually landed on FFI::Platypus as the method which worked best for getting Perl to communicate with PipeWire.
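To give a flavour of what FFI::Platypus usage looks like (this is a generic sketch along the lines of the module’s documentation, not Harald’s PipeWire code), here’s how one attaches a C function from an already-loaded library and calls it from Perl:

    # Generic FFI::Platypus sketch: call a libc function from Perl, no XS needed.
    use strict;
    use warnings;
    use FFI::Platypus 2.00;

    my $ffi = FFI::Platypus->new( api => 2 );
    $ffi->lib(undef);    # undef = search the currently running process (libc, etc.)

    # Every function needs an explicit declaration: name, argument types, return type.
    $ffi->attach( puts => ['string'] => 'int' );

    puts('Hello from Perl via FFI::Platypus');

For a large API like PipeWire’s, such declarations are needed for every function and data type, which is where the manual effort comes in.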
Because XS isn’t so easy to use, and because Dave Mitchell wants to rewrite the XS tutorial, Harald decided to wait until Dave has finished the new tutorial before trying XS again.
With FFI::Platypus, one can build C interfaces without XS. It’s possible
to use this module to make a connection to PipeWire, but some C code was
necessary to get everything to go properly. Harald found that some
functions were defined as static, which meant that FFI couldn’t find them
automatically. It turns out that there’s a workaround for this issue, which
lets everything work. Thus, one can make static functions and macros
available to Perl via self-written wrappers. However, if there are lots of
structs, then there is a lot of manual translation of C declarations
necessary in order for everything to work, and it turns out that there are
several constructs in PipeWire which can’t be mapped to FFI::Platypus.
To work around these issues, Harald used Convert::Binary::C, which
operates on C header files and sources (ironically, not on binaries). CBC
gave insight into the C constructs, which then helped create Perl classes
equivalent to the C structs and might also help with creating XS typemaps.
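As a small illustration of what Convert::Binary::C does (again my own sketch, not Harald’s code), one can parse a C struct declaration and then unpack raw bytes into a Perl data structure:

    # Sketch: parse a C struct with Convert::Binary::C and unpack raw bytes.
    use strict;
    use warnings;
    use Convert::Binary::C;

    my $c = Convert::Binary::C->new( ByteOrder => 'LittleEndian', Alignment => 4 );
    $c->parse(<<'END_C');
    struct point {
        int x;
        int y;
    };
    END_C

    my $data  = pack 'l< l<', 3, 7;             # raw bytes, e.g. as read from C
    my $point = $c->unpack( 'point', $data );   # becomes { x => 3, y => 7 }
    printf "x=%d y=%d\n", $point->{x}, $point->{y};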
He ran into a bug when trying to play sound with threaded Perl, which caused a segfault. It turns out that the issue doesn’t arise when using a non-threaded Perl. Since most Linux distributions provide a threaded Perl, it’s necessary to build Perl yourself (a source build produces a non-threaded Perl by default) to work around the problem.
Harald managed to get a lot of what he wanted to achieve to work, but he
won’t be able to get much further with his initial goals. To get as far as
he did, it was necessary to use a current, stable Linux version, a
non-threaded Perl and a recent GCC version to get everything to compile.
The CBC config is rather fragile and difficult to get just right.
He found that FFI::Platypus was easy to learn, and it works with libraries
from various languages. However, it needs declarations for every function
call and every data type, which is a lot of work. It didn’t work out of the box with the libraries he has tried so far; there are workarounds, but they are tedious. He also found Convert::Binary::C to be a good companion to
FFI::Platypus or XS, but it was not a replacement. CBC allows
introspection and code generation, which one can then use with FFI or XS
wrappers. Unfortunately, it doesn’t support GCC extensions, which some
libraries use extensively. Also, its configuration can be tedious and
brittle.
I liked Harald’s overview of what’s possible with XS and FFI::Platypus for
getting Perl to work with libraries from other projects. I have only really
played with XS, and it’s interesting to see what other options are available
and what the plus and minus points of each are.
Herbert Breunung (lichtkind) - Raku Grammars
Herbert started his talk by explaining a bit of the background of the Raku language, its community and its ecosystem. He then spent some time showing examples of Perl and Raku code and contrasting the differences in behaviour. This was presented as a kind of quiz to see how well familiar Perl constructs work in Raku. He then started to introduce grammars in Raku and showed that it’s very easy to generate a parser from a given grammar definition. Unfortunately, due to time constraints, he wasn’t able to cover everything he’d planned to discuss.
Raja Renga Bashyam - Perl’s T20 to Test Match Moments of Fibenis: Adaptive system evolving on Natural Lang. Principles
Raja travelled all the way from India to give a talk at the workshop. I found that very impressive! The workshop has become much more international, with participants coming from England, the Netherlands, Austria, Switzerland, the US, as well as, of course, Germany. But wow, to have a visitor from so far away made an impression.
Raja works at the company Webstars Codegram Informatics, which creates the Fibenis tool. His talk contrasted various aspects of Perl with the game of cricket. As someone originally from a Commonwealth country, I could completely relate to the idea. Interestingly enough, I left the Commonwealth before the T20 variant of the game became popular; I’m much more familiar with the 1-day and test match (5-day) cricket variants.
One can compare some roles in cricket with roles that Perl takes. For instance:
- Bowling: the delivery maps to shell and automation
- Batting: the score maps to text and regex
- Fielding: the defence maps to DB abstraction
- Keeping: the catch maps to debug and pre-compiling
However, some players in cricket can perform many roles. These are called “allrounders”, and Raja presented the case that Perl is an allrounder.
In cricket, there are three main variations on the game: T20, which is fast-paced and lasts only three hours; one-day matches, which have 50 overs for each side, and take roughly one day to play (actually about 7 hours); lastly, there are test matches, which take 5 days of roughly 7-8 hours each to play. When comparing cricket to Perl, we can make the association that T20 is like one-liners: short and fast. Then one-day matches are like mini tools and utilities. And finally, test matches are like large-scale applications. Thus, Perl is an all-format all-rounder, i.e. can play any game variation and can fulfil many roles.
Raja then spent the rest of the talk describing how his company uses ideas and patterns from Perl to create flexible web-based applications for their customers’ needs. Interestingly enough, the example code that he showed was in PHP. I asked him about this afterwards, and he said that yes, some of the code had moved to PHP, but the basic ideas that had been taken from Perl were still there. He also mentioned that his company plans to open-source its software in the future.
Paul Cochrane (ptc) - Getting FIT in Perl
Next came my talk, where I described the Geo::FIT module, which one can use
to parse Garmin FIT files. I discussed examples of what data is available
in such files and what one could (potentially) do with that data. There
were a couple of questions at the end of the talk, so I get the feeling that
it went down ok. One thing that probably didn’t come across in the demo at
the end of the talk was that the file I analysed and displayed data from was
taken on one of the stages of my trip from Hannover to Berlin. The bike
trip was thus in a way a requirement for the talk so that I had fresh data
to play with.
The slides of my talk are available on the talks page.
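For those curious what using Geo::FIT looks like, here’s a short sketch along the lines of the module’s documented usage (the field names are real FIT record fields; the file name is illustrative):

    # Sketch: read the 'record' messages (GPS track points) from a FIT file.
    use strict;
    use warnings;
    use Geo::FIT;

    my $fit = Geo::FIT->new();
    $fit->file('ride-to-berlin.fit');    # illustrative file name
    $fit->open or die $fit->error;

    my $record_callback = sub {
        my ($self, $descriptor, $values) = @_;
        my $time = $self->field_value( 'timestamp',     $descriptor, $values );
        my $lat  = $self->field_value( 'position_lat',  $descriptor, $values );
        my $lon  = $self->field_value( 'position_long', $descriptor, $values );
        print join( "\t", $time, $lat, $lon ), "\n";
    };

    $fit->data_message_callback_by_name( 'record', $record_callback )
        or die $fit->error;

    my @header = $fit->fetch_header;
    1 while $fit->fetch;
    $fit->close;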
Julien Fiegehenn (simbabque) - Turning humans into SysAdmins (without having to be one first)
Many of Julien’s recent talks have been about how to turn young developers into Perl programmers (or software developers in general). Because he’s been through the German apprenticeship system and now lives and works in the UK, he’s made a modified version of that system for his company so that it works within the British context.
This time, Julien was presented with a difficult problem: how to create a similar career development and progression program for sysadmins? This is especially challenging as a trainer if one isn’t a sysadmin oneself. Since none of the sysadmins at work had the time or desire to train newcomers, it fell on him as a trainer to come up with a solution. His strategy involved a mix of human and AI information to come up with such a training program. He interviewed the sysadmins at work to find out what their day-to-day work involved. Then he fed the transcripts of the interviews as well as the structure of the German software developer apprenticeship program into Google Gemini to iteratively create teaching materials for new sysadmins. With lots of iteration and plenty of review by actual humans (that was an important part of the process), he was able to create teaching materials, sysadmin guidelines, as well as a clear training progression path for new sysadmins. It was necessary to go through everything page by page and chapter by chapter to really understand everything, not only at a high level. In the end, he wrote most of it himself, yet the AI was useful in getting the initial structure and content in place. The documents were then taken to the sysadmin team and discussed with them so that they could give feedback and improve everything. In the end, it seemed like the project was a success, so hopefully, Julien will be turning humans into sysadmins in the near future.
Thomas Klausner (domm) - Deploying Perl apps using Podman, make & gitlab
domm had given talks in the past about deploying Perl apps using Docker, GitLab and Kubernetes, as well as using Podman and Ansible for Perl app deployment. This talk was another variation on this theme.
Building and deploying applications takes place in the CI pipelines within
GitLab. He’d developed this particular pattern in the context of his
company, Validad, where he has lots of Perl backends running in containers.
Using Podman simplifies deployment because it’s much like Docker, and one
can use podman-compose, which is like docker-compose to build and run
container-based applications. Also, since all his backends are small and
run in one stage on one node, there’s no need for fancy autoscaling, hence
one can avoid, e.g. Kubernetes. He uses systemd to start and stop
services, uses gopass for secrets management, and most of the processes
are coordinated via Makefiles, which call Ansible playbooks to do the
actual deployment.
Using containers simplifies application development, and he gave some advice on how to use containers well, e.g.:
- One should use multi-stage builds to reduce image size.
- There should be one service per container.
- There’s no need for local::lib or perlbrew inside a container.
- One can use well-known paths within containers instead of setting them dynamically in a configuration file.
- One can use volume mounts to get files from a node to the container and then share these between containers.
- One should pass the environment config to the container via environment vars, or as an env file.
I found his presentation of Podman interesting because I’ve only ever used
Docker myself, and finding out about this alternative was good information to
learn. Podman calls itself “the best free and open source container tools”.
Some of the advantages of Podman are that it can run rootless, doesn’t need
a daemon (which Docker does), and one can run a container as a given user
(Docker usually runs the container as root, which can be a security issue).
The podman-compose equivalent isn’t quite as good as Docker’s docker-compose, and Podman uses a Containerfile instead of a Dockerfile, but otherwise it sounds much the same as Docker. Podman can create a pod, which is a collection of containers that all share a network.
As mentioned above, domm uses systemd to run user-specific services and
finds that it works well together with Podman.
Application deployment is triggered by a special git tag carrying a specific deployment name; thus, not every push gets built, tested, and deployed as a container. This saves resources, and the devs have direct control over when an application is deployed. Deployment itself is then handled by special make targets.
In the Makefiles, there’s a lot of intelligence built into the make
targets to create the relevant deployment tags and push new code. One can
use, e.g. make release stage=beta to deploy an application to the staging
environment (the testing environment is called alpha; prod, however, is still called production).
Lots of work has also gone into configuring GitLab-CI so that it works the
way he wants. He makes use of child pipelines for various subtasks and
avoids putting all steps within a CI pipeline; all steps are wrapped behind
appropriate make targets, thus running things in CI is often as simple as
running make. With the setup used here, he’s also able to run the
deployment locally by using the same make commands and hence without
needing GitLab-CI. This turns out to be very handy for testing deployment
process changes.
I’ve created similar processes at my previous job and in other projects, so it was interesting to see familiar patterns appear in someone else’s workflow. As they say: great minds think alike!
Getting home
And with that, the conference was over. I went back to the hotel and did a lot of packing, unfortunately without managing to squash everything back into my bags as compactly as I had on the way there. Fortunately, it didn’t rain on the way home!
That evening, a few of us met up at an Italian restaurant, which was very helpful for me, because I needed to carbo-load for the following two days. After a yummy meal, and more interesting discussions with familiar faces, I headed back to the hotel to try to get some sleep before setting off early the next morning.
The following two days went well; there wasn’t much wind, and on the second day, it came a little bit from behind me, which was nice. After many muesli bars and packets of nuts and raisins, I arrived home safely. Next year, I also plan to ride my bike to the workshop, but it won’t be as far as this year.
Many thanks to the organisers, sponsors and participants for a fun workshop!
GPW2027
Talking about next year’s workshop, Julien and I put ourselves forward to organise the workshop in Hannover in 2027. We’d been throwing the idea around for about the last 9 months and decided to take the plunge.
See you next year in Hannover!