[{"content":"Prologue\nNo excuses, I took a long time to write this post. Post-hackathon fatigue can hit hard, and describing what we managed to achieve during those 24 hours is no small challenge, because there were many attempts and many different approaches. But now, looking through the train window on my way from Suwałki to Poznań, I can feel the writing flow taking over, just like a Windows update on a random Tuesday at 12:40.\nWriting flow aside, the reality we had to face on site was way less poetic.\nImagine a table with 64 million rows. I know that is hard to picture, so here is some help: 64 million rows in Times New Roman is about 1,300,000 A4 pages.\nNow imagine reading those 1,300,000 pages and then predicting energy consumption from them. Not exactly easy. As we all know, for this kind of challenge the first thing we usually reach for is decision trees. We did the same at first. But after a few hours we decided to do something completely different and used a model that was originally designed for almost the opposite kind of task, and only recently started being adapted to many other domains. Come along if you want to see a forest of regression trees first, and then I will tell you how that one crazy experiment brought us 1st place out of 45 teams in this task, and why sometimes it is worth throwing the safe instruction manual out the window.\nA short intro to the EnsembleAI hackathon format\nTo understand the emotions my team and I felt during this fierce battle, we need to start with the hackathon format, because it is unusual, to say the least, and gives dopamine hits stronger than Instagram Reels. Each of the 4 tasks is scored separately, and points are assigned based on submitted solutions specific to each task. In task 3, which I worked on, that was for example a CSV file with predictions of monthly energy consumption for a given time interval. 
Because of this setup, the leaderboard page was the central place of the hackathon, where each position in a task translated into points. Submissions could be sent only at predefined intervals, among other reasons to avoid DDoS-ing the servers. So after every upload there was always a tense waiting period: did our solution improve the ranking, and by how much? But let us start from the beginning: what, how, where, and why? The task was defined by one of the hackathon partners, Euros Energy, which also provided the data. So what was it about? In the problem statement, we got a clear picture of how mass electrification is a milestone for Poland\u0026rsquo;s energy transition. But for energy distributors, the fast growth in heat pumps creates major challenges. That is why accurate demand forecasting is essential to prevent grid overloads and, as a result, failures.\nThe data we got\nWhen we talk about machine learning and prediction, it would be a shame not to start with the data, so let us do exactly that. Each team had access to 3 main datasets:\n- Train: October 2024 - April 2025\n- Validation: May 2025 - June 2025\n- Test: July 2025 - October 2025\nWe made predictions on that last dataset for every submission, but here comes the twist that decided everything. It was the familiar Kaggle mechanism: Public vs Private Leaderboard. The Test set was technically available to everyone, but\u0026hellip; it did not include our \u0026ldquo;y\u0026rdquo; target. So there was no way to retrain on it or verify results on our own.\nFor the full 24 hours, we were fighting \u0026ldquo;in the dark,\u0026rdquo; seeing results only for a small slice of the data on the board. But those points did not carry the final weight in the overall ranking. The final score deciding the podium was computed on the remaining, fully hidden part of Test, and nobody knew those results until the very end. 
That made the last minutes of the hackathon pure emotional lottery, because summer behavior could be very different from the autumn-winter period we mostly trained on.\nIn practice, the evaluation looked like this:
| Score | Months used | Weights |
|---|---|---|
| Leaderboard (visible) | Validation only (May - Jun 2025) | - |
| Final score | Validation + Test (May - Oct 2025) | 2/6 validation + 4/6 test |
In short: in the end we had around 600 sensors sending logs every 5 minutes in the periods above, which gave us around 64 million rows (10.42 GB!) to analyze.\nGoal\nShort and simple: the prediction target was not instantaneous power, but the monthly average value of the grid load indicator (x2) for each device. So we moved from high-resolution data (readings every 5 minutes) to monthly aggregates. Below is the exact formula from the task description:\nFor each device d and forecast month m, we needed to predict the average x2 value across all N_{d,m} 5-minute readings in that month:\ntarget_{d,m} = (1 / N_{d,m}) * \u0026sum;_i x2(d,m,i)\nAnd the metric on both the live and final leaderboard was MAE:\nMAE = (1 / n) * \u0026sum;_i | y_i - ŷ_i |\nSo, time to describe our efforts and the road that took us straight to 3rd place in the whole hackathon.\nFeature engineering and data preprocessing\nAt the start, of course, we had to inspect the data and distributions closely, and that is what I did. But even before that, at the very end of the organizer instructions, we found a section (an image in the original post) about Poland\u0026rsquo;s energy distributors.\nAt that point I thought we should start there and add information for each sensor about which energy distributor it belongs to. Surely every team would do that, right? Right?? Well, in the end it turned out they did not :D and who knows, maybe that gave us those extra points.\nThe data included latitude and longitude for every sensor, so based on that I decided to locate each device in a specific voivodeship by querying the GeoPy API. 
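Once each sensor has a voivodeship, deriving the operator feature can be as simple as a lookup table. A minimal sketch, with an illustrative and deliberately simplified mapping (real distributor service areas do not follow voivodeship borders exactly, and the names here are only examples):

```python
# Illustrative voivodeship -> distributor mapping; the real service
# areas are more granular than voivodeship borders.
VOIVODESHIP_TO_OPERATOR = {
    "podlaskie": "PGE",
    "wielkopolskie": "Enea",
    "malopolskie": "Tauron",
}

def operator_for(voivodeship: str) -> str:
    # Fall back to a sentinel for regions missing from the mapping,
    # e.g. sensors whose coordinates could not be resolved.
    return VOIVODESHIP_TO_OPERATOR.get(voivodeship, "unknown")
```

The sentinel value matters: any sensor whose location lookup failed still needs a valid categorical value downstream.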
It turned out the data was probably anonymized or contained errors, because some locations were incorrect and GeoPy could not find the right place. In those cases, we used KNN to find the nearest sensor with valid coordinates. Then a mapping assigned each voivodeship to one of the distributors such as PGE, Enea, or Tauron, and that gave us our first interesting feature. Another important aspect was data aggregation. There was a lot of data, enough to overwhelm many models, so we chose hourly aggregation. It seemed to significantly reduce dataset size, remove noise from the 5-minute logs, create room for pattern detection, and still remain a useful prediction unit.\nOverall, the problem was quite interesting because at first I approached it as a time-series prediction task. But after deeper thought, this is really a plain regression problem. Sure, measurements come every 5 minutes, but the target is MONTHLY! That is a strong aggregation, and as my university professor would say: we clearly need the sharpest axe possible for this prediction, not a scalpel. Plus, a fairly universal axe that can connect important features in autumn and then apply those insights in summer too.\nFirst approach\nMy first approach was CatBoost. We had a mix of categorical and numerical features, so I decided boosting trees could fit this world quite well. So we went full speed with CatBoost and the following hyperparameters (without tuning at that point):
```python
CatBoostRegressor(
    iterations=800,
    learning_rate=0.05,
    depth=6,
    loss_function="MAE",
    cat_features=CATEGORICAL_FEATURES,
    random_seed=42,
    verbose=100,
)
```
And as they say: boom. It hit hard, because our first model got 0.0074 MAE. 0.0074!!!! That is really tiny\u0026hellip; especially with monthly aggregation and this data profile.\nThen came a barrage of feature-engineering rounds, exploration, and trial-and-error. 
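One example of the kind of feature work from those rounds is cyclic time encoding: mapping the hour of day onto a circle with sine and cosine so that 23:00 and 00:00 end up as neighbors instead of 23 units apart (the same idea we applied to hours and months; this is a generic sketch, not our exact pipeline code):

```python
import math

def encode_hour(hour: int) -> tuple[float, float]:
    # Map hour-of-day onto the unit circle so the encoding has no
    # artificial jump at midnight.
    angle = 2 * math.pi * hour / 24
    return math.sin(angle), math.cos(angle)

# 23:00 and 00:00 land close together on the circle:
s23, c23 = encode_hour(23)
s0, c0 = encode_hour(0)
distance = math.dist((s23, c23), (s0, c0))  # small, unlike |23 - 0|
```

The same two-component trick works for months with a period of 12, which is exactly how seasonal context can survive a naive numeric encoding.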
In the end, while fighting other teams that reached similar results and eventually overtook us, our last CatBoost step was Optuna to squeeze as much as possible out of it. We got MAE = 0.0044. Every model iteration was a real battle, and I still think getting that value from a tree model alone was a strong result. Especially because, slight spoiler, Transformer is a much heavier architecture, so it is hard to compare the two directly since they sit at opposite ends of efficiency and compute requirements. Still, I consider that result really good given our knowledge and skills.\nAutobots, roll out When did we abandon our beautiful tree? First, when I felt that further changes, attempts, and feature engineering were no longer moving the needle, or moved it too little to climb higher. Second, when a team literally called \u0026ldquo;Transformers\u0026rdquo; beat us and, in a way, inspired us. After a short research phase, I decided to bring truly heavy artillery: Feature Tokenizer Transformer. It is a relatively fresh architecture that has recently become more and more popular in Kaggle competitions.\nGeneral idea and mechanism of Feature Tokenizer Transformer The description below is based on the paper that introduced FT-Transformer. The images also come from the same source.\nFrom the top: in our dataset, and in tabular datasets in general, we mostly deal with two types of features: categorical and numerical.\nAs we know, Transformers were widely used in NLP in generative models like GPT, or encoder-decoder models like T5. So how do we force this architecture to process not token embeddings this time, but categories and numbers together?\nMain component: Feature Tokenizer This is exactly what the Feature Tokenizer does. 
It is the key gem of this approach, and it works in two specific ways:\nNumerical features: relatively straightforward -\u0026gt; we take a scalar, multiply it by a learned weight vector with embedding-size length, add bias, and that scalar gets stretched into an embedding of the target size.\nCategorical features: similar to NLP token handling. Each feature value is first transformed into a one-hot encoding representation, then multiplied by a weight matrix. In short math terms, this is selecting a specific row from that matrix plus, of course, bias.\nOne-hot encoding means changing a categorical value into a binary vector. Sounds weird, but it is simple. Example: we have a feature \u0026ldquo;Color\u0026rdquo; in a motorcycle dataset. Suppose there are two colors: red and black. In vector form, that is [Red, Black], so red is first position, black second. The one-hot representation is like turning lights on, so red is [1,0], black is [0,1].\nAll feature values are concatenated into a large matrix T. Then on top of it we append a randomly initialized [CLS] vector with the same length. Next, the entire matrix is processed and passed into the Transformer, so T represents one row in our table (including that extra [CLS] vector). Diagram below:\nBut why [CLS]? CLS stands for Classification, and the main role of this vector is gathering information across all layers during the forward pass.\nThen, as you can see, our T vector with processed features goes into the Transformer, passes normalization, and then goes to Multi-Head Self-Attention. This layer lets the model discover the context needed to get a result closest to ideal. In our case, context means other columns in the table, so values from matrix T. That context is what, among other things, gets accumulated in [CLS].\nAnd why Multi-Head? Similar to language models where one head can capture grammar and another emotion, here each head looks for a different context in our data row. 
That means one head can track hard geographic dependencies (for example, consumption vs voivodeship/operator), another can search for hidden technical relations (pump model vs consumption), and [CLS] receives a full multidimensional picture instead of one averaged mush.\nFinally, we discard all other rows from matrix T except [CLS], which carries the core information needed for downstream processing (in our case, predicting specific consumption), and that goes straight into classification/regression.\nThat is the extended short version of how the whole thing works under the hood.\nApplying FT-Transformer in our task\nFinal feature engineering\nDuring those 24 hours I tested many feature ideas, often asking an LLM if it had interesting suggestions. So here is what we added and finally used to train our Transformer, though some of these features were also used for CatBoost:\n- deviceType helps the model capture differences in operating behavior.\n- x3 is an additional categorical feature carrying information about heating curve type.\n- operator lets the model account for differences in operating conditions and policies.\n- voivodeship adds geographic context affecting climate and system seasonality.\n- device_operator_combo captures interactions specific to a given device-operator pair.\n- t1_mean-t13_mean are the average values of signals t1-t13 in a time window, describing their typical level.\n- t8_max is the maximum of t8, describing extreme peaks and high-load episodes.\n- t8_std is the standard deviation of t8, measuring signal variability.\n- t7_max is the maximum of t7, indicating short extreme system states.\n- t4_min is the minimum of t4, useful for detecting deep drops.\n- delta_load is the change in load between time points, capturing system dynamics.\n- delta_source is the change on the source side, potentially reflecting switches or power condition jumps.\n- cwu_demand is DHW demand, directly affecting system operation.\n- delta_temp_out_in is the output-input temperature difference, describing energy transfer and process efficiency.\n- cwu_spike is a flag for a sudden DHW demand increase, useful for short and abrupt events.\n- hour_sin is the sine of hour-of-day, encoding cyclic time without an artificial jump between 23:00 and 00:00.\n- hour_cos is the cosine of hour-of-day, complementing the above and reconstructing the full daily phase.\n- month_sin is the sine of the month, representing yearly seasonality continuously.\n- month_cos is the cosine of the month, closing the cyclical season representation together with month_sin.\nUnder the hood: network, head, and hyperparameters\nTheory is theory, but now let us move to how we adapted these Transformer blocks to our dataset.\nIn theory, numbers are linearly projected by learned vectors. But we went one step further: each numerical feature was first processed, before entering the Transformer, by a small neural network, namely an MLP (Multi Layer Perceptron):
```python
nn.Sequential(
    nn.Linear(1, embed_dim // 2),
    nn.ReLU(),
    nn.Linear(embed_dim // 2, embed_dim),
)
```
We did this because not all features influence the result linearly, so we injected some nonlinearity before the Transformer input.\nCategorical features were embedded in the standard way described above. The only addition was OOV slots (Out of Vocabulary), in case an operator or deviceType was unseen. What happens next is the classic Feature Tokenizer Transformer described earlier. Hyperparameters we used:\n- Embedding size: 64\n- Attention heads: 8\n- Transformer layers: 3\n- Dropout: 0.1\nAfter data passes all Transformer layers, we reach the final part, the regression head. The idea is simple: from the whole matrix we extract only the specific [CLS] vector mentioned earlier. Why this one? Because thanks to attention, it has absorbed information from all other columns and carries a condensed representation of the full row.\nThe remaining vectors (for example region-related) are simply cut off because they already did their job. 
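To make the flow concrete, here is a hedged PyTorch sketch of the pieces described so far: per-feature tokenization (a small MLP per numerical feature, an embedding table per categorical feature), a prepended [CLS] token, a Transformer encoder, and a head that reads only [CLS]. The sizes mirror the hyperparameters listed above, but the class name, layer choices, and everything else are illustrative, not our exact hackathon code:

```python
import torch
import torch.nn as nn

class FTTransformerSketch(nn.Module):
    def __init__(self, n_num, cat_cardinalities, embed_dim=64, n_heads=8, n_layers=3):
        super().__init__()
        # One small MLP per numerical feature: scalar -> embedding
        # (the nonlinear twist on the paper's linear projection).
        self.num_tokenizers = nn.ModuleList([
            nn.Sequential(
                nn.Linear(1, embed_dim // 2),
                nn.ReLU(),
                nn.Linear(embed_dim // 2, embed_dim),
            )
            for _ in range(n_num)
        ])
        # One embedding table per categorical feature: a row lookup
        # is exactly "one-hot vector times weight matrix".
        self.cat_tokenizers = nn.ModuleList([
            nn.Embedding(card, embed_dim) for card in cat_cardinalities
        ])
        # Randomly initialized, learned [CLS] token.
        self.cls = nn.Parameter(torch.randn(1, 1, embed_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, dropout=0.1, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Sequential(
            nn.LayerNorm(embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, x_num, x_cat):
        tokens = [tok(x_num[:, i:i + 1]).unsqueeze(1)
                  for i, tok in enumerate(self.num_tokenizers)]
        tokens += [emb(x_cat[:, i]).unsqueeze(1)
                   for i, emb in enumerate(self.cat_tokenizers)]
        cls = self.cls.expand(x_num.size(0), -1, -1)
        seq = torch.cat([cls] + tokens, dim=1)   # [batch, 1 + n_features, dim]
        out = self.encoder(seq)
        pred = self.head(out[:, 0]).squeeze(-1)  # read only the [CLS] token
        return pred.clamp_min(0.0)               # no negative consumption

# Toy usage: 3 numerical features, 2 categorical features.
model = FTTransformerSketch(n_num=3, cat_cardinalities=[5, 4])
pred = model(torch.randn(2, 3), torch.randint(0, 4, (2, 2)))
```

The clamp at the end mirrors the safety guard we describe below: predicted consumption is never allowed to go negative.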
Our [CLS] goes into a tiny neural head made of a normalization layer and ReLU activation, which finally compresses all those complex numbers into one final value.\nAt the very end, we also added a hard safety guard. Since we predict energy consumption, negative values make no physical sense, so we clipped everything below zero to prevent nonsense outputs.\nTraining phase\nA few words about how we approached model training overall. We wanted to do it efficiently, without pointless Transformer training runs and without wasting precious hackathon time. We had two main phases:\nPhase 1, the test ground\nInstead of training on everything, we made a hard time cut at the beginning of February. The model trained on data before that date and then predicted the future: what happened after February 1. Why a date split and not a random one? Because for energy consumption, a random split would cause data leakage, meaning the model would see the \u0026ldquo;future\u0026rdquo; to predict the \u0026ldquo;past.\u0026rdquo; In this phase we also added Early Stopping, so training halted when improvement stalled. Of course, we saved all checkpoints. This phase gave us a realistic MAE before submitting anything to the organizers.\nPhase 2, full speed ahead\nAfter many tests in Phase 1 confirmed the architecture was stable, we moved to Phase 2 -\u0026gt; more data = better model. At the end we removed the February 1 cutoff and fed in all available historical training data. This heavily fed and tuned model generated the predictions that went into our final submission file.\nSmall tip at the end\nIt is worth mentioning that the Transformer learned a scaled average x2 value, normalized with StandardScaler. Neural networks generally like normalized targets, so this likely added another brick to more stable and efficient FT-Transformer training. Right before saving predicted values to the output file, predictions were inverse-transformed back to the target scale.\nEpilogue\nSo why could this work, and now we can say it did work? 
It is hard to say anything with 100% certainty, because large and complex neural networks are still kind of black boxes. Surely each of the listed practices helped a bit. But if I had to pick one thing with bigger impact, I would point to the famous Multi-Head Self-Attention mechanism. The main challenge in this data was extracting universal knowledge from the autumn-winter months, when heat pumps typically run at high load, and transferring that knowledge to summer consumption, when usage is much lower. In FT-Transformer, the context mechanism could model how strongly features affect the output and how much specific attributes should be considered in special cases. On top of that, our nonlinear MLP that processed numerical values could enrich these features and assign them more meaningful influence. As we know, Transformers can generalize well, and I believe that was the first violin in this task. Still, credit goes to the teams right behind us. The second-place team\u0026rsquo;s result was more than 50% worse than ours, and we were probably the only team that pulled out artillery as heavy as a Transformer for this task. Other teams used tree regressors like LightGBM, and considering the complexity gap between our architecture and theirs, they did a really great job. In the end we managed to take the lead, and we can be proud of our solution.\nSo\u0026hellip; next year?\nAnother EnsembleAI, and once again an amazing experience. Huge thanks to the organizers for such a great event and to my DNS team (Team of Missing Szymon), in this lineup:\n- Jakub Hudziak\n- Jakub Binkowski\n- Maciej Kaszkowiak\n- Maciej Mazur\n- and of course me :D\nWe brought the fire, guys, and I hope not for the last time. I may be repeating myself, but I mean it every single time. 
So, see you next year?\n","date":"2026-04-07T16:00:00Z","image":"https://blog.huszcza.dev/p/ensemble-ai-2026/cover_hu0e514019a1e8ec17548c39a03053cd2b_1242696_120x120_fill_q75_box_smart1.jpeg","permalink":"https://blog.huszcza.dev/p/ensemble-ai-2026/","title":"Transformers Cutting Down Trees - EnsembleAI 2026"},{"content":"Notes vs. Code: How 13 Years of Musical Education Made Me a Better Dev and More Lately, I’ve realized more and more that the greatest return on investment (ROI) I experience daily is my 13 years of primary and secondary music school. What’s even more incredible is how that invested time influences almost EVERY area of my life. But first, let me tell you a bit about what those dozen or so years actually looked like.\n13 Years in the Suwałki Music School: My Foundation At the very beginning, I want to say that I was lucky enough to work with the best educator and\u0026hellip; friend for all those years. My trumpet teacher, Mr. Paweł Misiukiewicz, spent over a decade instilling in me not just the knowledge of every sharp and flat in every scale (the horror! D:), but also how to be determined, responsible, and mature in a way that far exceeded my age and the circumstances outside of school.\nBesides demanding those traits, I’d be lying if I said he was the only challenge. The one playing \u0026ldquo;second fiddle\u0026rdquo; as a teacher was the trumpet itself. In my opinion, it’s the most versatile, beautiful-sounding instrument, and I wouldn’t trade it for any other (it\u0026rsquo;s the best, obviously), but\u0026hellip; it is ungrateful in its own way. I used to curse it, but today I see that all these benefits are the result of \u0026lsquo;brute force\u0026rsquo; that this instrument simply forced upon me.\nFirstly, when playing the trumpet, you play with your lips—which means a small but very real muscle—so consistency is perhaps the most important thing (technical aspects aside). 
It always annoyed me that pianists might practice longer statistically, but if they took a week off during the summer, it was no tragedy. The trumpet? 3 days off == 10 days of getting back into shape—that was the formula Mr. Paweł gave me, and it’s actually true.\nConsistency: 3 Days Off == 10 Days of Recovery And so emerges the first trait I practiced for 13 years and try to train today, even though I spend much less time with the trumpet – Consistency. When I set a goal, working toward it step-by-step is no problem at all; it’s hardcoded into me like Arch Linux in a typical Redditor. I don\u0026rsquo;t need to explain that whether it\u0026rsquo;s studies, work, the gym, or ANYTHING, life requires consistency. That’s how I’ve achieved things completely unrelated to music—like regular gym workouts, polishing a language, or consistent reading. I know someone might think: \u0026ldquo;Buddy, plenty of people work out without needing a decade of music school XD.\u0026rdquo; I agree, but I’m talking about the naturalness of it—the fact that it feels ordinary and, I dare say, relatively simple.\nThe next trait of the trumpet is also closely linked to the first. We can already see a pattern: the trumpet requires time and consistency, and therefore, excellent planning. If I had to name something I like about myself, it’s that I’m a really good planner, because for 13 years, this was a daily reality—a requirement to move forward.\nTechnical Exams and \u0026ldquo;Making Music\u0026rdquo;: Plan or Fail Exams in music school often consist of a specific program. It looks different for every instrument, but on the trumpet at an intermediate level (secondary school), we have two exams. One is technical, focusing—as the name suggests—on the mechanics of playing: execution, difficult \u0026ldquo;notes,\u0026rdquo; large intervals, demanding tempos, and endurance tests. The second exam focused on performing a longer piece with accompaniment, usually piano. 
I always had the pleasure of playing with the best and most reliable Mrs. Danuta Long. I hope neither Mr. Paweł nor Mrs. Danuta will mind me mentioning them by name, but it’s impossible not to speak directly about whose contribution I value and what made me look back at those years of work with a smile.\nThis second exam was more about \u0026ldquo;making music.\u0026rdquo; That’s a broad term, but I mean performing a piece in a way that is correct, objectively beautiful, and true to the composer\u0026rsquo;s intentions. Add to that syncing up with the accompaniment, which required teamwork rather than just individual effort like the first type.\nI’ve sketched out the requirements—you can see it’s no walk in the park. An exam means many hours spent practicing solo with sheet music or with the accompanist. And then there were the regular exams, various concerts with the orchestra, and more (I had the chance to play in different ensembles, the school orchestra, the city band, and many side projects, like caroling with Suwałki’s pro volleyball players).\nFor every event, there was a deadline and organized rehearsals. For each one, you had to prepare bit by bit, focusing on what was working the least at that moment. Every etude, every piece—literally everything that passed \u0026ldquo;through my fingers\u0026rdquo;—had a purpose: whether it was technical drills or those two specific lines I struggled with most. The process was simple: I always had to divide the material to finish with a buffer, and during execution, identify what was going worst and focus on those parts.\nOh, and I almost forgot: most exams required memorizing the material. I don\u0026rsquo;t need to say much here—without ego, I can state my memory is sharp as a blade, limited only by the time I have to learn and how many things I’m doing at once. 
I’ve recently realized my memory does have limits, but that’s a topic for another post.\nHonestly, these things accompanied me from a seven-year-old to a—as Mr. Paweł would say—\u0026ldquo;stary koń\u0026rdquo; (translation: an imprecise Polish term for someone old enough to be held accountable; age undefined, in this case, around 20). How are these any different from my current requirements at work? I get a \u0026ldquo;ticket\u0026rdquo; on Jira, I have to plan what needs to be done, what I’ll do first, set a deadline, and identify what requires the most work and heavy testing.\nDebugging Stress: How to Beat Stage Fright? It’s easy to imagine that not just planning, but stress was an inseparable part of exams and performances. I can admit I’m someone who gets more stressed than others, even when I feel well-prepared. However, the fact that it required more effort from me meant that I got to know myself very well. I’m lucky enough that I feel stress practically only before a performance; the moment I start acting, it all vanishes.\nStressful situations hit us daily: at university before a major project, at work during a presentation to the whole team. I know myself well. I know I need to keep my mind busy beforehand, and I have a few techniques that help me tolerate that stress and not let it dominate me—and most importantly, not let it turn into fear or force me to quit.\nA Lesson in Humility: \u0026ldquo;Look the Part Even if You Don\u0026rsquo;t Play the Part\u0026rdquo; While writing this post, I realize everything is connected. Like pulling a thread, I notice with every tug that I discover something new. Failures and expectations were my biggest struggle. Sometimes I hated the trumpet for that: when you mess up and \u0026ldquo;clams\u0026rdquo; (missed notes), everyone knows. 
It’s not like a squeak on a violin (maybe it isn\u0026rsquo;t, but that\u0026rsquo;s how I imagine it); the trumpet, with its volume and grandeur, announces to the world: \u0026ldquo;This guy just made a mistake\u0026rdquo;—so everyone can hear it.\nI still remember an orchestra concert and my not-so-successful entrance that everyone knew about (or maybe I just thought they did? I\u0026rsquo;ll never know). I also remember the conductor\u0026rsquo;s comment, Mr. Czarek (warmest regards—I believe discipline and professionalism were always the foundation of a good ensemble, and playing in that orchestra truly felt that way): \u0026ldquo;Benek is buying everyone a beer.\u0026rdquo; Nothing more needed to be said. I also remember a brass quartet concert where a performance was seen by half the school because SOMEONE made sure to record and share it.\nSo, there were plenty of failures. I should also mention that the trumpet requires a great deal of self-confidence. If you play quietly to avoid sticking out, you only increase the chance of a mistake. Only at the end of my education did I settle things with myself and decide to just play—to not be afraid and not think that a mistake would be more audible. The paradox is that by trying to hide it, you only increase the chances of it happening.\nLooking at it all: you can see that dealing with failure and having a \u0026ldquo;tough psyche\u0026rdquo;—and by that, I mean coming to terms with yourself, accepting the mistake, and saying: \u0026ldquo;Okay, let\u0026rsquo;s move on, I\u0026rsquo;ll try harder next time\u0026rdquo;—was crucial. Moreover, after an \u0026ldquo;unsuccessful\u0026rdquo; concert, a dramatic escape from the stage or showing everyone how sad I was wasn\u0026rsquo;t allowed. I still remember the last terrible note played on a disastrous quartet concert day: taking the trumpet from my lips, standing up with a wide smile, bowing, and leaving the stage. 
Only then did I drop the act.\nWell, as we say: \u0026ldquo;Co się nie dogra, to się dowygląda\u0026rdquo; (what you don\u0026rsquo;t play right, you make up for with how you look)\u0026hellip; And it’s not just about appearance or putting on a \u0026ldquo;brave face\u0026rdquo; (literally!). It’s more about not allowing yourself to show weakness on stage, not falling apart while the ball is still in play. Simply accepting the state of things but maintaining professionalism until the very end, even if it went badly. As I write this, I think of a captain going down with his ship. It’s similar: you don\u0026rsquo;t give yourself space for emotions in that specific moment; you hold yourself together. I approach mistakes the same way today: failure isn\u0026rsquo;t making a mistake; it\u0026rsquo;s failing to draw conclusions and making it a second time in exactly the same way.\nFinally, let me add the biggest challenge that crowned my history in music school—the diploma concert. I don\u0026rsquo;t know if this sounds like a small thing to some, but for me, it was the biggest event of my life so far. A diploma concert is about an hour-long, open solo recital (from a trumpeter\u0026rsquo;s perspective) to which you invite family and friends (so if you mess up, it\u0026rsquo;s in front of those you care about most). I had the pleasure—for which I am grateful to Mr. Paweł—of having a free hand regarding the repertoire. I spent some time researching, but finally performed pieces I truly liked, and working on them was pure pleasure (Centennial Horizon and Sonata by Jean Hubeau).\nI remember the final year very well. I was already in my first year of Computer Science, traveling every two weeks from Poznań to Suwałki to rehearse with Mr. Paweł. We practiced mostly the diploma pieces, and I loved it because the rehearsals were in a large concert hall, and we focused mostly on \u0026ldquo;making music\u0026rdquo;—making the performance as good as I was capable of. 
And I think I nailed that concert. Why \u0026ldquo;I think\u0026rdquo;? Because the stress was so high I remember very little of it, but I know one thing: I delivered. I didn\u0026rsquo;t let down my teacher, my family, and most importantly, myself. I graduated with the highest grade (celujący), and in my head, it stands as one of my greatest life achievements. Every time I face a major, stressful event, I remind myself of this: through sheer work and hundreds of hours, I delivered something much heavier and bigger. So why wouldn\u0026rsquo;t I handle this now?\nAnd that’s how I’ll end this lengthy post. I didn\u0026rsquo;t write about the obvious things like ensemble playing, as that feels natural—the whole orchestra is in sync often down to milliseconds and hertz. I also didn\u0026rsquo;t write about how much I appreciate music thanks to the school, its craftsmanship, and the emotions it evokes in me. How the sight of an orchestra sometimes truly grabs my heart.\nCoda: Thanks to those who are still with me today I’ve never had the chance to share my thoughts on school, music, and this whole story. So, from this place, I would like to thank:\nMr. Paweł – a friend, a teacher not just of music, but of life; Mrs. Danuta Long – the best accompanist and ally when I needed an \u0026ldquo;advocate\u0026rdquo; with Mr. Paweł; Mr. Grzegorz – who involved me in his ambitious projects and trips (I still remember them today!), even though I didn\u0026rsquo;t always play everything perfectly; Mr. Czarek – the orchestra conductor (and neighbor!), of which I was a proud member and where I felt like a professional; and all my friends – we played many a gig, went on many trips, and shared many great moments together. And as they say: last but not least – thank you to my parents. They are the ones who pushed me into the 1st degree, drove me hundreds of miles to lessons, waited patiently, and took the news when I screwed something up. 
But I thank them even more for NOT PUSHING me into the 2nd degree. I feel that was the most important part -\u0026gt; it was My decision (with a capital \u0026ldquo;M\u0026rdquo;), and that sense of agency gave me the greatest boost of empowerment.\nI look back at this time with fondness, and I won\u0026rsquo;t lie—I try to stay involved in the artistic sphere in my free time, whether it\u0026rsquo;s theater or playing \u0026ldquo;Centennial Horizon\u0026rdquo; for myself again, because I feel it is a huge and inseparable part of who I am.\nI don\u0026rsquo;t have a recording of my own performance, but if anyone is interested and would like to hear my repertoire, I encourage you to embark on this absorbing journey: Thanks for reading this post. Maybe you’ll start learning an instrument, or encourage someone who’s considering it? Until next time!\n","date":"2026-02-07T10:00:00Z","image":"https://blog.huszcza.dev/p/music-vs-it/cover_hu617634170b26205250057699274c3efc_1252253_120x120_fill_box_smart1_3.png","permalink":"https://blog.huszcza.dev/p/music-vs-it/","title":"Notes vs. Code: How 13 Years of Musical Education Made Me a Better Dev and More"},{"content":"MLA-C01: How I Passed the Machine Learning Engineer - Associate Exam The Beginning I recently managed to successfully pass my first cloud-related certification. I landed on the AWS Machine Learning Engineer - Associate, and honestly, deciding to take this specific exam was a pretty crazy choice :D. Why? Because I previously knew AWS architecture at a very basic level, which I had the chance to learn while creating my engineering thesis in collaboration with OLX. Back then, I had the opportunity to use services like Cognito, S3, Translate, and a few models from Bedrock. I knew literally nothing about concepts like VPC or IAM, which is, of course, pretty natural, as I simply didn\u0026rsquo;t need them while developing my project.\nBut getting back to it, why THIS specific certificate? 
Well, an announcement popped up at work that you could take part in a course, and completing it would get you a voucher for the exam. At that exact moment, I was taking a subject at university closely related to concepts like MLOps (Machine Learning Operations), which showed me just how much I enjoy this topic—in fact, it fascinates me (shout-out to Janek, our instructor ;)). And just like that, I didn\u0026rsquo;t hesitate for long. I filled out the form in a few seconds and, in the end, found myself on the list of course participants.\nI think the course was quite valuable. I don\u0026rsquo;t remember the exact topics covered in the lecture part, but every meeting ended with labs where we could mess around with, among other things, SageMaker (the platform for building, training, and deploying models). I found this valuable from a practical standpoint, and, you know, processing and running models on SageMaker is pretty expensive, so I hadn\u0026rsquo;t had many opportunities (a big fat zero) to use it before. After the course, I received the voucher, and this is where my rather long road to the exam began.\nThe Preparation I want to start by saying that I consider the first semester of my Artificial Intelligence specialization to be one of the toughest of my entire studies: there were a ton of projects, the exams weren\u0026rsquo;t the easiest, and all of this was accompanied by a 3/4 time job. In short: a wild ride. That\u0026rsquo;s why, after finishing the course in April, I decided to postpone the exam date to June—back then, it seemed to make a lot of sense. Without further ado, let\u0026rsquo;s jump to June.\nIn June, after finishing the exam session (with no retakes), I realized I had absolutely no energy after work (now full-time for the summer) to sit down and cram for the exam, so I pushed my studying to July.\nAnd in July, you know how summer is\u0026hellip; ah, gotcha ;) In July, I really did start preparing. 
And since we\u0026rsquo;re talking about preparations, I\u0026rsquo;m ending my dear-diary format and switching to a more substantive tone, so that everyone who came here for the \u0026ldquo;meat\u0026rdquo;—i.e., how to pass the MLA-C01 exam—will know how I did it.\nHow I Studied for the Exam The most important discovery, and something that helped me immensely, was the AWSCertifications group on Reddit. In short: it\u0026rsquo;s a goldmine of people who are taking, have passed, or are studying for their AWS exams. You can find literally everything on the topic there, from notes to advice on how to answer and analyze questions. New posts appear there daily, so I recommend reading through the experiences of the many people who have already walked this same path.\nAfter reading, I had my first conclusion: I had to start with Stephane Maarek\u0026rsquo;s courses on Udemy. And so I did. I watched about 24 hours of lectures covering the entire scope of the exam, from data processing to deploying models on SageMaker. At first, I tried to take my own notes, and that would have been valuable, but I\u0026rsquo;ll point out that I knew nothing, and everything I heard was new to me. What\u0026rsquo;s more, it wouldn\u0026rsquo;t have taken 24 hours, but probably twice that. After a few sections, I stopped and just decided to absorb the knowledge as it came and understand as much as I could. So I recommend this step to everyone. I listened to these lectures not only by watching them intently but also sometimes while making dinner, and I feel like even in that format, something stuck in my head.\nIt\u0026rsquo;s worth noting (this is something I learned from Reddit) that Stephane Maarek took some sections, such as data processing, from his other, more advanced exam courses. 
Sometimes the knowledge is very detailed, maybe even too detailed for the Associate level (there are 4 main levels of AWS exams, from easiest to hardest with the most detail: Practitioner, Associate, Specialty, Professional). Nevertheless, after taking the exam, I think it\u0026rsquo;s better to know more than less, because some questions were indeed very detailed. Additionally, there\u0026rsquo;s a segment on popular architectures where the Transformer architecture is discussed—you can definitely skip that, and if you\u0026rsquo;re in a hurry, I recommend skipping it. If not, it\u0026rsquo;s always good to broaden your horizons.\nAfter the lectures, I took another course from Stephane Maarek - Practice Exams. Three full practice exams that are genuinely difficult. Fresh off watching the lectures, I scored: 66%, 67%, and 70% respectively. And to pass the exam, you need 72%. Even though the results weren\u0026rsquo;t the best, it was from these practice exams that I learned and remembered the most. Why? Because I guess that\u0026rsquo;s just how I work. Even though they were still multiple-choice questions, they have a very practical feel. You analyze what the best solution is for a given \u0026ldquo;story,\u0026rdquo; e.g.: Company A wants to host a model, they want to do it as cheaply as possible, but they must account for sudden, unpredictable spikes in their application\u0026rsquo;s traffic—what should they use? And with each question, you analyze and learn which AWS solution is best for the job, what its specifics are, and, of course, its cost.\nI liked this format, and even though it was quite hard at first, it was much better for me than listening to dry lectures (although I still recommend doing those). 
I also recommend paying attention to the purely Machine Learning aspects: regularization, overfitting, metrics, or even quantization and fine-tuning, like LoRA.\nAfter finishing those tests, I still didn\u0026rsquo;t feel confident, and since learning from my mistakes proved to be effective for me, I decided to look for more tests. And this is where Tutorials Dojo comes in—a nice refresher, but the tests are significantly easier than the real exam and Maarek\u0026rsquo;s tests. There were a few interesting and unconventional questions, but I think they are worth doing more for the sake of exploring knowledge and learning other service use cases than for actual preparation for the questions on the original exam.\nNevertheless, I would still do them, just in a different order.\nMy recommended order:\nMaarek\u0026rsquo;s Lectures (as a solid knowledge base) Tutorials Dojo Questions (for reinforcement and exploring use cases) Maarek\u0026rsquo;s Exams (as the final boss and the real combat test) After doing the TD (Tutorials Dojo) exams, I went back to Maarek\u0026rsquo;s exams (about 2 months had passed since I first did them, you know, vacation and stuff\u0026hellip;) and on the second attempt, I scored: 81%, 86%, and 81%. During this second pass, I took comprehensive notes, both on the questions I got wrong and on the correct ones. Because sometimes you just feel the answer but aren\u0026rsquo;t 100% sure—it\u0026rsquo;s worth analyzing those too, to understand them and gain certainty in your choice.\nAfter that, I scheduled my exam. For the few days leading up to it, I only reviewed Maarek\u0026rsquo;s full presentation and my own notes. I had an extra 30-minute accommodation for the exam because I\u0026rsquo;m not a native English speaker, and I won\u0026rsquo;t lie, it came in pretty handy. During my first pass through the questions, I didn\u0026rsquo;t waste much time and just marked the answer I understood best and felt most confident about. 
This allowed me to get through two full passes in the allotted time. The second pass was much calmer: I analyzed each question in depth, which allowed me to rethink my answers and ultimately change 7-8 of them after discovering a few keywords in both the answers and the questions. And so, about 11 hours after the exam, I got a message that I passed with a score of 818, and I\u0026rsquo;m quite proud of it (just as I am of the badge I received from AWS). I\u0026rsquo;d summarize the exam itself in one word: tricky. Many answers seem correct but have one little word that totally changes their meaning. It was the same with the questions. So, I recommend decomposing the questions in your head into their components, figuring out what the exact requirements are, what you want to achieve, and what to focus on.\nLLMs in the Learning Process I think it\u0026rsquo;s also worth mentioning how I used AI while studying. On the aforementioned Reddit, there are a ton of threads on this topic, and as they say, \u0026ldquo;opinions are divided.\u0026rdquo;\nIf I had to summarize how I used AI in one sentence, it would be the word: Why?\nIf a practice test had an explanation, but I felt it wasn\u0026rsquo;t enough, I would go to Gemini or ChatGPT with that exact question. At first, I double-checked if their answers matched the AWS documentation, but after a while, I stopped, because the answers were correct time after time. I think it\u0026rsquo;s great to use these tools in exactly this way—for illustrating, describing, and explaining concepts better. For showing use cases or giving simple examples.\nWhat\u0026rsquo;s important is that I never once used LLMs to just give me the correct answer to a question. I preferred to choose the most logical option myself (even if it was wrong) and only afterward ask the AI for an explanation. And this method worked brilliantly for me.\nWhat Now? To everyone who is studying, I wish you good luck! 
For those who didn\u0026rsquo;t pass, I hope my path helps you conquer the exam on the second try. And to everyone else—thanks for reading this post! If you have any questions, write to me via email or on my socials. I\u0026rsquo;m happy to answer questions and help as best I can. Below, I\u0026rsquo;m dropping a section for people who just want the short version of how I studied. Now, as always, the question remains: what will occupy my mind next\u0026hellip;?\nHow I Passed MLA-C01 (TL;DR Version) Started on Reddit: I browsed AWSCertifications. Great stuff for seeing how others studied and what tips they have. Stephane Maarek\u0026rsquo;s Lectures (Udemy): First, I watched the entire course (~24h). At this stage, I was mainly absorbing knowledge to get the big picture. Tutorials Dojo Tests: I did these third, but I recommend doing them second. They were good for learning more service use cases, but I felt they were easier than the real exam. Stephane Maarek\u0026rsquo;s Practice Exams (Udemy): This was the \u0026ldquo;final boss\u0026rdquo; for me and the most important stage. These tests are really hard (my first scores were 66-70%), and I learned the most from them. My Key to Success: While doing the tests, I took very detailed notes. I analyzed not only the questions I got wrong but also the ones I got right (especially if it was just a \u0026ldquo;gut feeling\u0026rdquo;). The goal was to truly understand why a specific option was the best. This allowed me to gain knowledge, remember it, and understand the concepts. Recommended Order: Maarek\u0026rsquo;s Lectures -\u0026gt; Tutorials Dojo Tests -\u0026gt; Maarek\u0026rsquo;s Tests + taking in-depth notes. Before the Exam: I reviewed only my own notes and the slides from Maarek\u0026rsquo;s course. On the Exam: I read the questions very carefully. They are \u0026ldquo;tricky.\u0026rdquo; I broke them down in my head by requirements, looked for keywords, and did two full passes through all the questions. 
","date":"2025-11-03T12:00:00Z","image":"https://blog.huszcza.dev/p/aws-mla-c01-en/cover_huda35e8d3581351945c7eae22c1f222ff_1066935_120x120_fill_box_smart1_3.png","permalink":"https://blog.huszcza.dev/p/aws-mla-c01-en/","title":"MLA-C01, or How I Passed the Machine Learning Engineer - Associate Exam"},{"content":"What is Benedykt-the-Dev? Benedykt-the-Dev is my personal website where I share my portfolio, skills, and blog posts about my experiences and interests. I built this project using Next.js along with a few smaller libraries, which I\u0026rsquo;ll talk about in a moment. The blog itself is powered by Hugo, ensuring fast performance and great SEO optimization.\nI have to admit, my inspiration for this project came from Maciej. I always read his posts with curiosity, and they ultimately motivated me to create my own site.\nHonestly, a year ago, I wouldn\u0026rsquo;t have guessed that I\u0026rsquo;d be writing a blog post about my own website. I hope that a year from now, I\u0026rsquo;ll look back with the same sense of reflection – but this time about something completely different that I can\u0026rsquo;t even imagine right now.\nWhat\u0026rsquo;s on the website? Portfolio – A showcase of my key projects with links to related blog posts, my career journey, interests, and a way to get in touch. Blog – A place where I share my thoughts, tutorials, and experiences from my life as a developer. Tech Stack Next.js – A frontend framework that works great with SEO and ensures fast performance thanks to server-side rendering (SSR). It also makes it easy to add new features, like a contact form, in the future. Framer Motion – My first experience with this animation library. It’s very intuitive and helps minimize unnecessary code. React Typist – I used this tool to create a typing effect on the screen. It’s lightweight and easy to implement – exactly what I needed. 
Tailwind CSS – A utility-first CSS framework that allows for consistent and modern styling without the need for separate CSS files. I love how it lets me write components and styles in one place. Hugo – The blog is powered by Hugo, which guarantees fast loading times and excellent SEO optimization. At first, I ambitiously planned to write everything from scratch, including the backend and CMS. However, I ultimately chose Hugo, which allowed me to start publishing blog posts much faster. This choice was also inspired by Maciej, who recommended this solution. What\u0026rsquo;s next? This project was a great opportunity to showcase my skills and share knowledge about web development, AI, and other topics that interest me. I’m planning to expand the site with a section about my hobbies and education.\nBut let’s be real – the next few months are going to be pretty intense, because in just a week, I’m starting a new part-time job (3/4 time) and a Master\u0026rsquo;s degree in AI. Pressure? Sure, but they say it’s a privilege – so I’m planning to make the most of it. One thing’s for sure: boredom is not on the agenda!\nWant to check out the source code or learn more? Take a look at the repository on GitHub.\n","date":"2025-02-24T16:00:00Z","image":"https://blog.huszcza.dev/p/benedykt-the-dev/cover_hu5459c0360c2b0cb7a147d2df0eb350ca_2043236_120x120_fill_q75_box_smart1.jpg","permalink":"https://blog.huszcza.dev/p/benedykt-the-dev/","title":"Benedykt-the-Dev – Portfolio and Programming Blog"},{"content":"Our Engineering Project – Is This the Future of Marketing? This project was developed by a four-person team as our engineering thesis. It was a huge challenge for us, but at the same time, an incredible adventure that allowed us to combine creativity with modern technology.\nAs I write this post, I feel proud of what we’ve accomplished. On the other hand, there are some limitations, which I\u0026rsquo;ll explain in a moment. 
But first, let’s talk about what we managed to create and why we believe it could be one of the future directions for marketing.\nWhat is this engineering project about? In short – together with OLX, we set out to create a tool that allows companies to quickly and easily generate advertising banners without the need to hire a team of graphic designers. This is made possible by using artificial intelligence algorithms that automate the design process.\nJust think about it – how many times have you seen repetitive, boring ads that passed by unnoticed? Or how many companies give up on advertising campaigns due to the cost of hiring professional designers? These are exactly the problems we aimed to solve by creating a banner generator platform.\nHow does it work? Without getting too technical – the system uses generative AI solutions to create unique and visually appealing banners based on user preferences. The user can choose from three different types of banners, add their logo, set dominant colors, and even generate a catchy slogan. The whole process takes just a few minutes!\nWhy can\u0026rsquo;t I reveal everything\u0026hellip; yet And now we get to the most interesting (and somewhat frustrating) part. At this stage, I can’t go into technical details or show exactly how the system works. Why? Simply because the copyright situation is still unclear. The project has sparked interest and\u0026hellip; that’s all I can say for now, as I don’t want to jinx anything.\nWhat\u0026rsquo;s next? This post isn’t overly detailed, but I plan to expand it in the future with technical insights. I’d like to explain how we managed to integrate AI with a seemingly creative process like graphic design. For now, I need to be patient and wait to see how things develop regarding the potential acquisition of the project.\nOne thing is certain – this engineering project was a challenge but also an opportunity for growth and learning new tools. 
Regardless of what the future holds, I am already proud of what we have achieved. And if everything goes as planned\u0026hellip; who knows, maybe one day you’ll see the results of our work on OLX banners?\n","date":"2025-02-24T02:00:00Z","image":"https://blog.huszcza.dev/p/ad-banner-generator/cover_hu5459c0360c2b0cb7a147d2df0eb350ca_697511_120x120_fill_q75_box_smart1.jpg","permalink":"https://blog.huszcza.dev/p/ad-banner-generator/","title":"Engineering Thesis – Project Developed with OLX"},{"content":"Introduction This project was developed as part of the course Medical Informatics. Except for a hackathon (by the way, I highly recommend reading Maciej\u0026rsquo;s post about this event), this was my first serious encounter with libraries such as PyTorch and OpenCV.\nSince the task turned out to be quite challenging, it forced me to dive deep into research on various image processing methods. I explored literally everything – from the simplest filters to more advanced computer vision techniques. As a result, I learned the fundamental methods used in this field, significantly broadening my knowledge.\nI won\u0026rsquo;t lie – it was tough at times, especially when noise in the images ruined hours of coding. Nevertheless, the vision of using technology to analyze medical images inspired me and kept me going. There were moments of frustration when things didn\u0026rsquo;t work as expected, but the satisfaction of a working solution definitely made up for all the struggles.\nK-Nearest Neighbors (KNN) – Blood Vessel Classification What is K-Nearest Neighbors? K-Nearest Neighbors (KNN) is one of the simplest and most intuitive machine learning algorithms. It operates on the assumption that similar data points are close to each other in feature space. 
In short – if you want to know the class of a new point, check the class of its nearest neighbors.\nIn the case of Vessel Extraction, KNN was used for:\nClassifying pixels as \u0026ldquo;blood vessels\u0026rdquo; or \u0026ldquo;background,\u0026rdquo; Analyzing pixel neighborhoods to better distinguish vessels from noise. Undersampling – How Did I Deal with Imbalanced Data? What Were the Challenges? Overwhelming amount of background data – Areas without blood vessels (background) dominated the images, causing the model to learn to recognize mainly the background and not the vessels.\nUnderrepresentation of blood vessels – Pixels belonging to blood vessels made up less than 10% of all data, biasing the model toward the majority class.\nHow Did I Handle It? I decided to use undersampling – intentionally reducing the number of background samples so that the number of vessel and background pixels was more balanced. Sounds simple, but it required a few thoughtful steps:\nSelecting Background Samples:\nI didn\u0026rsquo;t randomly discard background data, as this could lead to a loss of important contextual information. I focused on representative samples, specifically those located near blood vessels. This gave the model better learning context. Reducing Background Samples:\nI ultimately reduced the number of background samples by about 70%, resulting in a more balanced ratio of vessel to background data. It was crucial not to overdo it – I had to leave enough background to prevent the model from confusing it with vessels. Preserving Local Patterns:\nBy using 3x3 pixel patches, the model retained local patterns, which improved accuracy. How Did KNN Work in This Project? Feature Extraction:\nEach pixel was described by its brightness value and the values of neighboring pixels. This provided the model with more information about the local context. 
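As a rough illustration, the patch-based feature extraction and the background undersampling described above could be sketched like this (a minimal numpy-only sketch; the function names are mine, not the project's, and for brevity this version drops background pixels at random rather than preferring samples near vessels as I actually did):

```python
import numpy as np

def extract_patch_features(image, mask, patch_size=3):
    """Describe every interior pixel by the brightness values of its
    patch_size x patch_size neighborhood (3x3 -> 9 features per pixel)."""
    r = patch_size // 2
    features, labels = [], []
    for y in range(r, image.shape[0] - r):
        for x in range(r, image.shape[1] - r):
            features.append(image[y - r:y + r + 1, x - r:x + r + 1].ravel())
            labels.append(int(mask[y, x]))  # 1 = vessel, 0 = background
    return np.array(features), np.array(labels)

def undersample_background(X, y, keep_fraction=0.3, seed=42):
    """Keep all vessel samples but only ~30% of background samples,
    i.e. roughly the 70% background reduction described above."""
    rng = np.random.default_rng(seed)
    background = np.flatnonzero(y == 0)
    kept_bg = rng.choice(background, size=int(len(background) * keep_fraction),
                         replace=False)
    keep = np.concatenate([np.flatnonzero(y == 1), kept_bg])
    return X[keep], y[keep]
```

Features prepared this way can be fed straight into any off-the-shelf classifier, for example scikit-learn\u0026rsquo;s KNeighborsClassifier.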
Choosing the Number of Neighbors (k):\nThe key parameter in KNN is k – the number of nearest neighbors whose class is considered for classification. I conducted cross-validation to find the optimal value for k. The best results were achieved with k = 5, ensuring a balance between accuracy and recall. Classification:\nFor each pixel, the classes of its k nearest neighbors were checked. The pixel was assigned to the class with the most representatives in its neighborhood. Results and Performance Accuracy: 89% – pretty good for a simple model without deep learning! Recall: 85% – effectively detected blood vessels but sometimes confused them with thin background lines. Precision: 91% – the model successfully avoided false positives (mistaking the background for vessels). I know, at first glance, it\u0026rsquo;s hard to believe a model this simple got such good \u0026ldquo;numbers\u0026rdquo; (i.e., accuracy). But here\u0026rsquo;s the trick – it\u0026rsquo;s all about the chosen approach.\nI used 3x3 pixel patches because smaller fragments make it easier for the model to detect local patterns characteristic of blood vessels. The total image size was 512x512 pixels, so if the classifier recognized a 3x3 patch as a vessel, all 9 pixels in that patch were completely filled in white.\nThis approach meant the model was more confident in its decisions, which positively impacted accuracy and Dice score.\nFastAi – Deep Learning for Blood Vessel Classification Why FastAi? After testing the classic KNN approach, I decided to take it up a notch and use FastAi – a framework built on PyTorch that is excellent for rapid prototyping of deep learning models. FastAi provides:\nEasy integration with pre-trained models (e.g., ResNet), A simple API that speeds up data preparation and model training, Advanced optimization techniques (e.g., learning rate finder). How Did FastAi Work in This Project? 
Data Preparation:\nImages were divided into smaller patches to help the models learn patterns more effectively. I used the FastAi DataBlock API for efficient data management and labeling. Classification was performed over two classes: Blood vessels, Background. Deep Learning Model:\nI chose ResNet34 – lightweight but powerful enough for vessel recognition. I used transfer learning with pre-trained weights (ImageNet), which sped up training. Fine-tuning the last layers helped tailor the model to the specific task of vessel recognition. Results and Performance Accuracy: 92% – a clear improvement compared to KNN. Recall: 90% – the model effectively recognized vessels, even in challenging cases. Precision: 94% – very few false positives, resulting in highly accurate vessel detection. First Step into Machine Learning This was my first individual project in machine learning and computer vision, and it was an incredible learning experience. I understood how powerful image processing techniques are and how to handle imbalanced data with undersampling. Although there were many challenges and frustrations, I got hooked on machine learning. Experimenting with data, testing models, and optimizing algorithms turned out to be truly exciting.\nI realize that I have a lot more to learn – from advanced neural network architectures to GPU optimization – but I\u0026rsquo;m excited to continue this journey. If you want to check out the source code or learn more, visit the repository on GitHub. Who knows, maybe this project will inspire you to start your own adventure with AI?\n","date":"2025-02-23T18:00:00Z","image":"https://blog.huszcza.dev/p/vessel-extraction/cover_hu5459c0360c2b0cb7a147d2df0eb350ca_4078906_120x120_fill_q75_box_smart1.jpg","permalink":"https://blog.huszcza.dev/p/vessel-extraction/","title":"Vessel Extraction – Image Processing Using Python and OpenCV"},{"content":"Where Did the Idea Come From? 
I am lucky to have my sister Róża, who studies at the Medical University of Białystok. Moreover, not only does she study there, but she also runs the university magazine – Młody Medyk. For about two years, the student organization under Róża\u0026rsquo;s leadership has been organizing speed dating events. And this is exactly where the problem arose, which the future Doctor brought to me. As a mature developer, I decided to roll up my sleeves and solve it.\nWhat is SpeedDatingMatcher? SpeedDatingMatcher is an event management system specifically designed for adding participants, their preferences (willingness to contact after the event), and sending emails with contact information to selected people. The application handles email communication through integration with Brevo. The first edition was deployed on Microsoft Azure, but honestly, I wasn\u0026rsquo;t fully aware of all the SSR properties, and the whole endeavor ended with considerable frustration with Azure, although I eventually managed to deploy the system on it.\nIn this year\u0026rsquo;s edition of the application (2025), I opted for a more civilized approach: VPS with Docker and nginx. I also had access to a Free Tier AWS EC2 server, but since I was planning to create this blog in the near future, I decided to purchase server access right away.\nFeatures The main task of the application was not only sending emails but also automatically matching people participating in the event. That is: if a person with number 3 wanted to contact numbers 2 and 1, then before sending the email, I needed to check if both person 2 and 1 also wanted to contact number 3. Only if there is a mutual preference, the email is sent. This situation is illustrated by the following graphic:\nIt is clearly visible that the email is sent only when a cycle of length 2 is found in the graph.\nUltimately, the application enables:\nManaging speed dating events – Comprehensive management of participants and meetings. 
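The mutual-match rule described above boils down to finding 2-cycles in the preference graph. A minimal Python sketch of that check (hypothetical names; the real app keeps this data in Django models rather than a plain dict) might look like:

```python
def mutual_matches(preferences):
    """preferences maps a participant number to the set of numbers they
    want to contact; return only the pairs where interest is mutual,
    i.e. the 2-cycles in the preference graph."""
    matched = set()
    for person, wanted in preferences.items():
        for other in wanted:
            # the email goes out only if 'other' also picked 'person'
            if person in preferences.get(other, set()):
                matched.add(frozenset((person, other)))
    return {tuple(sorted(pair)) for pair in matched}

# Person 3 wants 1 and 2, but only person 2 wants 3 back -> one match.
print(mutual_matches({3: {1, 2}, 2: {3}, 1: {5}, 5: set()}))  # {(2, 3)}
```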
Email validation – Ensuring that all provided email addresses are correct. Integration with Brevo – Automatic sending of notifications and reminders to participants. Additionally, thanks to Django, it\u0026rsquo;s easy to create and manage users.\nTech Stack Django – Backend and database management. Next.js – Fast frontend with server-side rendering capabilities. Brevo – Integration for sending emails. BeautifulSoup4 – Dynamic modification of HTML email templates. Microsoft Azure – Cloud hosting for the first version. Docker – Containerization of solutions. Nginx – Reverse proxy on the VPS server. Summary Honestly, this was my first serious project completed outside of work. I would be lying if I said that planning the architecture, selecting technologies, and writing the solution didn\u0026rsquo;t give me immense joy and didn\u0026rsquo;t awaken my developer\u0026rsquo;s soul.\nThanks to this project, I had the opportunity to test my ideas and also get familiar with SSR, Django, and Azure.\nI co-created the project with Maciek, which provided an opportunity to manage tasks, divide them, and conduct mutual Code Review.\nWant to learn more? Check out the repository on GitHub.\n","date":"2025-02-23T17:00:00Z","image":"https://blog.huszcza.dev/p/speeddating-matcher/cover_hucb6842c766b35d342dbb95cacf542a9b_2241605_120x120_fill_box_smart1_3.png","permalink":"https://blog.huszcza.dev/p/speeddating-matcher/","title":"SpeedDatingMatcher – Event Management with Django and Next.js"},{"content":"","date":"0001-01-01T00:00:00Z","permalink":"https://blog.huszcza.dev/p/","title":""}]