Really interesting blog post about #generativeAI. The author shares his opinions and fears. I found it interesting how he uses #LLMs for code generation. Basically, he asks LLMs to critique his understanding of a topic and asks for better keywords for his research.
Today I read a #study about assessing the performance of LLMs and humans at customer-service-like jobs. It was easy to read, quite interesting, and had valid points. I like how they calculated helpfulness metrics based on turns to answer and then derived a final score from 0 to 100. The whole research is based on questions which can be summarized as a set of problems usually found on #StackOverflow. The conclusion is that current #LLMs are already capable of solving forum-like questions better than humans. Pretty appealing, and it shows that in the future, help desk jobs will probably get more and more obsolete. What I found remarkable is that the average scores achieved without #RAG are close enough to human responses to conclude there is not much difference. But enough good things about this research. Now some bad things. First of all, there are no examples of communication between human and human or human and LLM. It’s super important to put such things in a research paper. Without them, you can hide so many details about the entire evaluation process. Second of all, what were the questions? There are no examples of them. Who came up with the questions? I can surely come up with a set of questions which LLMs will solve perfectly and humans won’t, and vice versa. This is crucial for research to have. I mention this since the conclusion is based on metrics which can be easily manipulated by researchers to prove their thesis. If you do research in a field which is really blurry, like #LLMs, you need to be explicit about the datasets you use for evaluation. At least that’s my opinion.
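The paper doesn’t spell out its formula, but a turns-to-answer helpfulness score scaled to 0–100 could look roughly like this. The cap of 10 turns and the linear mapping are my own assumptions for illustration, not the paper’s:

```python
def helpfulness_score(turns_to_answer: int, max_turns: int = 10) -> float:
    """Map the number of conversation turns needed to resolve a question
    to a 0-100 score: 1 turn -> 100, max_turns or more -> 0.
    Both the cap and the linear mapping are assumptions, not the paper's."""
    turns = max(1, min(turns_to_answer, max_turns))
    return 100.0 * (max_turns - turns) / (max_turns - 1)
```

With a metric this simple, you can see how easily the final number shifts depending on who picks the cap and the scaling.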
Simple #script which allows you to deploy an application to a remote #host. It uses #rsync and Go as the application builder, but you can quickly change it for whatever purpose you need. The author claims that it’s production ready, and I would say it is, if your production application does not need to be #reliable and #scalable. Either way, I would still love to use it for my small web services running on various VPSes.
I took a two-week break. No IT news, no reading. I focused on having fun and chilling out. I climbed a volcano in Bali, snorkeled, drank in local pubs, and enjoyed various local foods.
Now it’s time to go back to weekly reads. I’ll start next week!
I don’t use LLMs a lot, but I read about how others do. This article clearly summarizes some tools which you might want to use if you aren’t concerned about the ethical use of copyrighted content ;) Tools for coding, reading, mind mapping, searching. All of this in one article that also has examples of how the author uses them. #LLM #tools #AI
“LLM Detectors Still Fall Short of Real World: Case of LLM-Generated Short News-Like Posts,” n.d. https://arxiv.org/pdf/2409.03291
tldr; #LLM content detection is far from being solved. How do you detect content generated by LLMs? Is it even possible to do it reliably? Researchers have been asking this question for a long time already. What I learned from this paper: - There are already solutions you can use, but they are not very reliable: Fast-DetectGPT, GPTZero, BERT-based detectors. - Benchmarking detectors is hard. Evasion strategies are complex and were not really applied in previous research. - We don’t have a silver bullet for the problem, and probably one won’t ever exist. To conclude, if the people who provide solutions like #ChatGPT or #Gemini won’t invest in high-end detectors, the #internet will go extinct.
Apparently, people use #Godot for app development, and it seems super easy. When I look at the scene maker, it all reminds me a bit of the #Qt framework, where you work with layouts and signals separately. Also, the author suggests that you can preview the app you write on #android with "one click deployment". I would love to write my apps on #Linux and port them to Android with one click. Guess I have to learn Godot then!
How do you classify #nsfw content and distinguish it from #art? Can you do it with free classifiers? Is there any bias in the current open models? All of this can be found in the following research. To conclude, it’s possible, but still not very accurate. Also, there are biases in the training datasets and models.
#Lizard apparently said: "I think individual creators or publishers tend to overestimate the value of their specific content in the grand scheme of [#AI training]. […] We pay for content when it’s valuable to people. We’re just not going to pay for content when it’s not valuable to people." Then don’t fucking use it. I did the same with #facebook and boy, my life is soo much better now.
Interesting approach to generating #geometrical #datasets for LLMs. It’s a small #study which resulted in a system that can generate various images with correct descriptions. I really like how simple the idea is and how beneficial it can be. Not only for future generation of training data, but also for teaching LLMs how to describe #images with this new #syntax. No harm done, the #LLM is trained and validated. What a time to be alive!
Interesting article on how to create infinite loops of responses with #LLMs. The author shares a couple of scenarios where a human would easily give up on the conversation, but due to the nature of LLMs, the infinite loop of responses cannot be easily prevented. If you ever get a Turing test, this is a way to get through it.
I really enjoy reading unbiased opinions on #LLMs. There are plenty of biased people, just like me. The ones who do not use LLMs on principle, believing they were created by #stealing content. Still, the #technology seems to be useful and really helpful for #programmers. This blog post describes different situations in which #Go programmers used LLMs to get helpful responses. Apparently, it can be helpful, and I’m probably missing a lot by not using it.
I tried to recreate one of the effects found on the awwwards website. I think it went pretty well. One thing that’s missing is the velocity effect while moving the tiles.
“What’s New in C++26 (Part 1),” n.d. https://mariusbancila.ro/blog/2024/09/06/whats-new-in-c26-part-1/
Apparently they added new features to C++26: - you can specify the reason for a function deletion, as in = delete("reason") - not sure why it’s part of the method semantics and not some comment or note - placeholder variables with no name, written as a single underscore And two more which I don’t understand at all. Yes, new C++ looks even more messy than 5 years ago.
As a beginner artist, I have a huge issue with drawing details. When I isolate certain smaller parts of the painting, it starts to look better, but still not good enough. Apparently, there is a way to make it more methodical. 1. Get some model to work on, like a photo reference 2. Draw small rough sketches, two by three inches 3. Select the one which is simply the best 4. From the pose of your sketch, take the most important parts and draw them separately, focusing on the details 5. Remove the photo and work only with those sketches you made I really like this concept, as it allows you to focus on creative painting instead of copying the model.
#AI #upscaling algorithms and their cumulative distortion shown on a video. I really like this example, as it shows something which your AI CEO is probably not aware of. If you don’t store AI-augmented content like images, videos, and text from #LLMs separately, soon the emails your employees send will start to look really weird. The same goes for internet content like blog posts. The more AI you put into it, the worse future AI will get. Not only because content quality will decrease, but also due to cumulative error introduced by AI developers not aware of which training data was previously artificially generated. I really like this blog post. Informative and short.
I’m still looking for that one perfect tool to focus on as a future artist. #watercolor is really nice, but every so often I want to use tools which do not require cleanup afterward. That’s why I bought myself some cheap #oilpastels and started to draw with them without understanding that you can actually blend them … This video allowed me to explore the topic of color #blending with #pastels, and I’ll definitely explore it further. #art
Interesting blog post on how to deploy #microservices in a company more efficiently. The whole idea is to have one dedicated team which automates various tasks like #library changes, updates, and #architecture changes. It’s an interesting idea, but I wonder if it’s not too much work for one team. Also, this only works when you have a codebase made of one language, like #Go, #Java or #Rust. From a company perspective, it seems logical to limit the drift of languages, as it’s much easier to maintain such a setup when employees change jobs. Also, you need to use some central way of storing code, like a #monorepo architecture. In general, I like the idea, but I’m pretty sure the limitations are huge.
Reminder for every developer out there: don’t create complex boolean expressions; split them into separate variables instead. #programming #cleancode
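A minimal Python illustration of what I mean (the checkout scenario and all the names are made up, just for illustration):

```python
# Hard to scan: one dense condition.
def can_checkout_dense(user, cart):
    return (user["verified"] and not user["banned"]
            and len(cart) > 0 and all(i["in_stock"] for i in cart))

# Easier to read and debug: each clause gets a descriptive name
# you can inspect in a debugger or log individually.
def can_checkout(user, cart):
    is_trusted = user["verified"] and not user["banned"]
    has_items = len(cart) > 0
    everything_in_stock = all(item["in_stock"] for item in cart)
    return is_trusted and has_items and everything_in_stock
```

Both behave identically; only the second tells you at a glance which business rule failed.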
10 easy ways to #write your text more clearly. I really liked these: - if your #sentence has two commas, it’s probably too long - rewrite your first draft with one rule: no significant #words from the first draft are allowed in the second draft Cool stuff!
Richard #Dawkins answering a bunch of questions about the future. I really liked the questions about #embryology and the future of human enhancements via artificial neuron receptors. The good remark about the whole embryology topic is the race between countries to push it as far as possible. Sad, but it’s what’s happening to everything. Humanity is doomed. What a time to be alive!
Fantastic blog post on how "#reverb" in sound works. If you’ve never heard of it and have some interest in how to build one, this blog post will help you do it. The number of examples and animations showing how reverb behaves for different setups is just insane! #music #C++
The author shows how you can use #bash and the #eval function to metaprogram functions which they can reuse. It’s a really interesting approach, but I would probably not do it this way; I would write code which is reusable via environment variables supplied to a set of functions. Basically, there would be an active profile which I would load with the ‘source‘ command, and commands would not have their prefixes. Still, an amazing blog post. I love such creativity in the Linux community. It brings me positivity without toxicity in my city, in my ciiiiiiityyyyy!
Does your digital drawing look pixelated? Are the prints you make always weirdly small or too big? There is actually some science behind that. The post describes how to set up the canvas in your digital artwork tool, what DPI is, and how to print your art. All of that with wonderful explanations.
“COVID-19 DETECTION BASED ON BLOOD TEST PARAMETERS USING VARIOUS ARTIFICIAL INTELLIGENCE METHODS,” n.d. https://arxiv.org/pdf/2404.02348
Small and interesting research about detecting COVID based on blood samples and X-rays of lungs. I’m not very knowledgeable about the medical part of it, but I loved how in the end the researchers used Grad-CAM to show why the AI flagged some lung scans as COVID-positive. Really, you can see the heatmap on the X-ray scan, and it’s just a wonderful application of Grad-CAM. #ai #lungs #covid #research
Interesting blog post about how to use git notes. I was not aware that this is how you can store the comments section of a git issue tracker. The whole idea seems pretty interesting, especially that you can search through it. #git
Amazing blog post about hacking the PS2 and XBOX360 via THPS 1, 2, 3, 4, American Wasteland and Underground. The author managed to achieve RCE through a park name exploit. #hacking
Shallow comparison of Postgres and Elasticsearch full-text search capabilities. No metrics were shown, but there are some key takeaways. First of all, apparently Postgres is not as fast and does not have as many features as Elasticsearch. I would love to check plugins for the database, as probably there are some. Second of all, Elasticsearch does not have ACID transactions, so it cannot act as a primary data store. #postgres #elastic
I haven’t updated this microblog for two weeks due to three weekend gigs and the PolAndRock festival. Every year I make this huge-ass party, and this year was no exception. Amazing people, amazing music, and a polished vibe all around.
Apparently, Odysee plans to make some changes to how they store creators’ content. Now they plan to use some blockchain, and it should allow for better flexibility between platforms. I would love to know more about it, but the video they created is just cringe, and there is not a single technical detail in it. Still, let’s see.
Short article about the Java "keytool". It’s simple and shows how to use it in different ways, like generating certificates and keystores, deleting certs from keystores, etc. #java #keytool
Interesting article which shows how a bipartite graph can be used to strengthen connections between architecture-like structures. #math #architecture #graph
Imagine getting personalized #AI generated spam which includes your GitHub blog entries. Ehh… not sure what’s worse: sending it, or finding out the sender is actually proud of their work …
Interesting blog post about FreeCAD files and how to store them in git. I really like such blog posts. I remember some time ago I read a similar one but on how to host huge encrypted .mov files on GitHub. :))
Interesting #paper on how to use LLMs for real-world tasks. This time, researchers used them to model letters made from clay. The output results are quite good for a first paper. #science
Omg what a genius project! The person behind this post describes how to use an old rotary phone as a VoIP device. Imagine sitting in a big cozy chair on a winter evening. Snow outside and a video of a cozy fireplace on your 50’ TV. Slowly putting down the whiskey glass on the table, you take the rotary phone, dial the number of your best lad and call him like a boss. "Yo BOI, Let’s play Quake II". There is no answer from the other side. Only a notification that your lad just went online. #technology #oldtech
Apparently, there is already fully AI-generated music trending on Spotify. Such AI bands have AI-generated music videos and AI-generated responses on Twitter. This blog post really nicely summarizes how to detect such creations. I actually like how the author did a bit of research into it. I also agree that we should not fully ban AI from artists’ tooling. Maybe someone needs this cheap helping hand. We should not replace them, though.
Every time Elon Musk farts with his brain on Twitter, there is a new wave of Mastodon users that gradually declines. I have seen 4 waves already, and every time people go back to X like moths to light. And I do blame them. Like really. You are the people who make this world a shittier place. But I also know that we are all like this. We all compromise our values in favor of comfort. So this study describes the last migration of scientists, who did not stay very long on Mastodon. There are various reasons for that. People seek centralized options that have better communities, support and an easier learning curve. Personally, I created my Mastodon account a long time ago and only started to really use it a year back. After that, I started to follow people who really interest me. Sadly, to do that I had to wade through so much lgbt porn, furry porn, porn, gore, political abuse, and general abuse, all to finally create my feed of people who have some interesting content. So I mean, Mastodon is not accessible. Describing what instances are is hard. It’s like Linux. People will never understand it, as their comfort zone is too tight. Don’t blame them. Let the world burn. Amen? Or should I say TOOOOT!
Apparently, researchers made ChatGPT roleplay as real-life companies like hospitals and game development studios. The video shows metrics which apparently show that the LLM improves predictions when it role-plays for a longer time. I am still skeptical about it, especially since ChatGPT shines so brightly in every study. This should make you think about what data was used for its training. Lastly, the video has a pretty good comment section. One of the comments is: "The fact that Chat GPT makes better or more accurate answers when you have it simulate individual actors inside a larger organization is really fueling my paranoia; we’re all just brains in a jar."
Short article on why we don’t really use recent CSS features. The author lists many reasons, like lack of support for old browsers, or that the improvements are not really visible on the screen but rather in the codebase. Personally, I think there is one more reason. Whoever has worked on maintenance of a web project knows that touching a style in one part of the CSS can have a drastic impact on the whole application. Component tests are hard, and almost no one is writing them. We don’t use new features because the architecture of the application could change and suddenly issues can pop up.
Sam Altman, CEO of OpenAI, deliberately cloned the voice of Scarlett Johansson for his new tool. It’s even more fucked up. He contacted her, and when she disagreed, they still used her AI-generated voice and claimed that it’s not actually hers. Can you even imagine how evil that is? When an artist declines to contribute to your project, you simply steal her voice and pretend it never happened. This is the mentality of a 5-year-old boy … I really hope OpenAI will lose all the incoming lawsuits.
Is there a freeze in #AI advancements? The author shares his opinion on this and on how, in his view, the future can look. Personally, I’m not sure anymore what is real and what’s not. AI is taking the jobs of creative designers and copywriters. Everything for the money, which no one is paying taxes on. I think we need more time to grasp the future of AI, not even mentioning the future of humanity as it is.
Have you ever thought about how to create a function whose end goal is minimizing the cost of a vehicle swarm’s overtaking maneuvers? No? Me neither, but the shit is fascinating. Actually, this paper reminds me of the linear programming courses I had at uni. I never fully understood how to solve complex LP problems, and the problem proposed here is complex AF. #science
Interesting post about the origin of emoji. Apparently, devices from Japan had similar concepts in 1988. I really like how the author showed the retro tech.
Not much time to read this week due to another long weekend. By the way, next week it’s again a long weekend in the Netherlands. It’s the last one, and after it the next public holiday is in December.
Short article on how M$ Copilot improves productivity, or rather how in the long term it does not. I promised myself that I’ll use a Copilot alternative once one is trained on code with a license that allows it, but I guess that will never happen. #ai #copilot #review
This man had the idea of making an email account for his baby and sending photos and texts to it. The account stays secret until the kid is old enough! What a brilliant idea. I actually do the same with my diary now. Basically, my daily notes about life are kept GPG-encrypted on one of my email accounts. #timecapsule #idea #gpg
Relatable, small rant about the issues of today’s internet. I really liked the part “Your Gmail is approaching storage limit”. I always wondered how the hell I got 10 GB of pure text messages polluting the outbox.
I actually started a new blog where I’ll log my travels. I had been thinking about it for quite some time, but I never invested time in it.
Finally, it’s time to change that. I have already been to so many places, and this year, because we plan to camp more, the travel blog seems like a good idea.
Long blog post about CSS features which are worth knowing for 2024. I had no idea about most of them. I guess this full-stack development does not make me a better web developer. #css #frontend
Simple article which shows how basic diffusion models are built. Basically, you have two processes. The forward process adds noise to the data until the original data is indistinguishable from the noise. The second is the backward process, which starts from noise and then, step by step, tries to restore the data. I wonder if the same techniques apply to images. #diffusion #math
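The forward process can be sketched in a few lines of Python. This is my own toy 1-D version with a made-up linear noise schedule, not the article’s code:

```python
import math
import random

def forward_diffuse(x0, t, betas):
    """Closed-form forward process: sample x_t given x_0.
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t is the cumulative product of (1 - beta)."""
    alpha_bar = 1.0
    for beta in betas[: t + 1]:
        alpha_bar *= 1.0 - beta
    noise = random.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * noise

# Toy linear schedule: with enough steps the signal coefficient
# sqrt(alpha_bar) shrinks toward 0, so x_t becomes (almost) pure noise.
betas = [0.02] * 200
```

The backward process is the hard part: it trains a model to predict and subtract that noise one step at a time, which there is no three-line version of.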
Do you feel stupid while learning new things? If so, this article is for you. I have a similar approach to the author when learning new things. Usually, I pick up a book about some topic, and if it does not allow me to understand something, I gradually either re-read the same book or check other sources. Really interesting article. #reading #books #learning
There is a proposal to add a "Masonry" layout as a #CSS Grid extension. I read the article, and personally, I think this should be a separate layout and not an extension of CSS Grid. The CSS Grid API is already unique and complex and should not be extended with new features. I think implementing this #masonry grid in browser engines can be really painful. But I’m in favor of finally providing a browser-independent solution for masonry grids. I remember some time ago I had to make one, and oh my god, it was super painful!
Interesting article tackling web content preservation. The author is using the #playwright tool to create screenshots of their blog. Everything runs in a GitHub pipeline and is tracked via Git LFS. I really like the idea of preserving your own content. Maybe I should set up something similar for other websites I read. Cool article!
A refreshing blog post about log levels and where to use them. Even though the author suggests that only two are needed, I disagree. Imho, log levels should make sense and be toggleable. You should add as many debug and info logs as you want, and simply strip them during the build process for production. This is the healthiest way that allows you to write good, maintainable and secure code. Nevertheless, I like that the author showed different logging libraries in various languages. From what we can see, there is clearly a pattern of "ERROR, WARN, INFO, DEBUG, TRACE"; usually there is also "CRITICAL | FATAL", which is good to keep in mind while developing your application. #logs #tracing
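To illustrate the level ladder with one concrete library (Python’s standard logging module, which the post may or may not cover), toggling is a one-liner:

```python
import logging

logging.basicConfig(format="%(levelname)s %(message)s")
log = logging.getLogger("app")

log.setLevel(logging.DEBUG)    # development: everything shows up
log.debug("cache warmed in 12 ms")
log.info("listening on :8080")

log.setLevel(logging.WARNING)  # production: debug/info are filtered out
log.debug("this line is now suppressed")
log.error("upstream timed out")
```

Python’s ladder is CRITICAL, ERROR, WARNING, INFO, DEBUG with no TRACE, but the pattern matches the one above.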
When I first read it, I thought the author introduced some brain chip for mice, but no! This mad lad actually made a mouse cursor controlled by a flute. Amazing project!
Another article about the benefits and drawbacks of LLMs. This time: - it’s good for mediocre jobs - it’s good at writing simple code - it’s good at spell and grammar checking - it’s bad for the environment - it’s stealing jobs - it’s letting capitalists capitalize on poor countries Nothing really new, but still a good read.
Interesting research about how students improve while using AI for code generation. Tldr; it seems to help, but it also creates issues. Most importantly, it encourages students to read more about programming!
Short story of a person who used Emacs for more than 10 years and eventually gave up in favor of VS Code. Apparently, the pair programming features of VS Code were too good to go back to Emacs. What can we take from it? I think the era of old editors like Vim, Emacs, NetBeans and Eclipse is already gone. New editors have a super hard time competing with VS Code, which has a massive community that builds plugins. When you think about VS Code and its issues, there is only one: it comes with proprietary spyware from Microsoft. There are versions without it, as Code is open source, but then you cannot use its Marketplace.
So imagine me, 8:27, sun is shining, and slowly the whole house is waking up for another fucking day to survive. I made my coffee, sat down, took a sip and started to read the article which got my attention. I usually don’t read clickbait articles, but this one got me due to the “Old Scam” mention. I love to read about old technology ideas that fail, fail and fail while the money is pumped, pumped and pumped regardless of how stupid the idea is. Humane’s concept of an AI pin that does not have a screen is one of those ideas. What I was not aware of is the way the big tech sharks decided to solve the accessibility issue. And oh god, what a solution they provided. They mounted a laser projector on it and decided it was good design to display data on your hand. Imagine you put this pin close to your nipple, try to find the right angle with your hand to display the data, but due to sunlight it’s unreadable, so either way you take out your smartphone and say "Google, create me an Amazon auction for this garbage nipple pin". But now for real. The article about its issues is really good and shows different perspectives on the problems created by this stupid idea. Worth a read.
A PostgreSQL optimizer researcher compares the performance benefits of major #PostgreSQL versions in a 10-year time window. On average, each version brings a 15% performance improvement, so… update your database!
Amazing, detailed and story-based article on "Etak", the first electronic car navigation system. I already knew some things about it, but this article allowed me to explore the whole technology more deeply. Here are some things I learned. The entire system used multiple sensors to detect the car’s position, but none of them were based on GPS. For example, to detect how far the car drove and display it on the map, a special sensor was mounted within the wheel itself. Next, since the error of such sensors accumulated quickly, there was a need for something to keep the position on the right street. That’s where map matching comes in. I was not aware that the team behind Etak designed and implemented the first map matching system, which was later sold commercially. The map was recorded on cassettes, which had issues when left in the sun. Basically, the plastic was melting due to the high heat at the back windshield. The team was aware of this and created many tests to ensure the cassettes stayed usable even during extreme heat waves. There are many more interesting facts which I won’t cover in this simple note. I highly recommend you read the article, as it’s extremely detailed and easy to read! #map #mapmatching #productdesign
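The dead-reckoning-plus-map-matching idea can be sketched in a few lines. This is a toy 2-D version I made up to show why the correction step is needed; Etak’s real system was obviously far more sophisticated:

```python
import math

def dead_reckon(start, moves):
    """Integrate (distance, heading_degrees) sensor readings into a position.
    Any per-step sensor error accumulates over time, which is exactly
    why Etak needed map matching on top of this."""
    x, y = start
    for dist, heading_deg in moves:
        rad = math.radians(heading_deg)
        x += dist * math.cos(rad)
        y += dist * math.sin(rad)
    return (x, y)

def snap_to_street(pos, street_points):
    """Toy 'map matching': snap the drifting estimate to the nearest
    known street point, resetting the accumulated error."""
    return min(street_points, key=lambda p: math.dist(pos, p))
```

The real trick Etak pioneered was deciding *which* street to snap to when several are plausible, not just the nearest point.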
Interesting article on how to grow #basil at home. There are plenty of tips on how to do it well, but for me, the best part was about the different basil types. I’ll summarize it here. - "Genovese" - the typical basil you find at shops, big leaves and a good amount of basil flavor. - "Italian large leaf" - another typical basil that you find at shops. - "Greek" - tiny leaves and a plant shaped like a small globe. It has a spicier flavor. - "Limoncello" - basil with a more citrusy flavor. It can be used for drinks, curries, etc. - "Red Rubin" - basil with bronzed, burgundy leaves. Apparently, it has pink flowers and a sweet taste with notes of cinnamon. #gardening
Google made an AI to generate compounds for new materials, and in general it shows promising results, but some issues appeared after their paper got peer reviewed. Apparently, some researchers checked the materials created, and they do not fall into the category of what a “new material” is. Apparently, in materials science, a new material needs to satisfy three criteria: "utility, credibility and novelty". When researchers analyzed the Google paper, they noticed that the vast majority do not fit those criteria. What is also worth noticing is that there are some materials which do not make sense at all. What can we take from it? First of all, this is the way we should use generative AI. It helps scientists and can lead to innovation. Second of all, such work needs to be peer-reviewed before it goes to the media. We all fell into the trap of generative AI as a promising solution for human labor replacement, and such papers do not help this situation. We need more scientists and proper feedback on such technologies. Otherwise, we will continue to pump CO2 into the air for the generation of cuddly little cats eating orcs from Lord of the Rings.
Blog post about different ways to generate fractals. It’s pretty interesting and easy to follow. There is also a part where the author describes why the Buddhabrot is so blurry and has many artifacts. Apparently, it’s because of a lack of precision while computing.
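For reference, the escape-time iteration that Mandelbrot-family renderers (including the Buddhabrot variant) are built on is tiny. This is the generic textbook version, not the author’s code:

```python
def escape_time(c, max_iter=100):
    """Classic escape-time iteration for the Mandelbrot set:
    iterate z -> z^2 + c and count steps until |z| exceeds 2.
    Points that never escape are treated as inside the set."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter
```

The Buddhabrot twist is that instead of coloring by escape count, you accumulate a histogram of every z visited by escaping orbits, and that is where the float-precision artifacts the author mentions creep in.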
Elderly Swiss women sued their government over not putting enough effort into stopping climate change, and they won! The court decided that the current efforts violated their human rights to "health, well-being and quality of life". Personally, I hope this case will open more doors to sue other governments and private companies. We cannot allow the world to be ruled by people driven by greed, instant gratification and the billionaire lobby. With the right pushback, I think we can change that. #positive #humanrights
The blog post describes different ways to include LLMs in products. I really like the parts where the author compares different views on LLMs. He specifically mentions people who treat LLMs like magic or like reinforcement learning. Those who actually read more about LLMs know that they are not a silver bullet and should be used cautiously. There is also a focus on HITL (human in the loop), where the author describes why it’s important for legal liability. Many people think that HITL won’t be needed in the future. But we don’t really know what the end result of the LLM race will be. Currently, OpenAI, Microsoft and Google are the best to bet on, but things evolve quickly. Especially when we look at image diffusion models.
From time to time, I check how people live “off grid” lives. The romantic idea behind it always hooks me for a couple of hours, and I start checking the funda website for some houses in the woods. I know that living #offgrid is just a romantic fantasy. Just like not having to earn money for a living, but it’s good to dream from time to time. When it comes to dreaming, the cold shower of reality validation is always tempting for me. I love when my dreams are crushed by reality. It actually makes me want to live even more. This video is a great summary of why the off-grid dream is super hard to fulfill. I loved every minute of it. The best part is when the girl debunks the "self-sufficiency" of people who claim to have built amazing off-grid houses by themselves.
Interesting article about data storage systems. I also think that the data we write now and want to be readable in the future should be written in a format which is simple and vendor-agnostic. "If you want your writing to still be readable on a computer from the 2060s or 2160s, it’s important that your notes can be read on a computer from the 1960s."
Local #LLM runner which is offline and #opensource. I tried it for a week, and it seems to be a typical LLM runner. Nothing fancy, but! It is open source and does not have a shitty license like LM Studio.
Great blog post on how to use Git as a debugging tool. I really like how the author showed how to use internal #git expressions for finding files. There was also a huge chapter on git bisect which actually got me thinking. What if we combined automatic git bisect with large language models? I believe an LLM could write a simple test for some code that returns one exit code for a GOOD commit and another for a BAD one. Then Git could automatically run it and find the broken commit. I think it’s not a revolutionary idea, but something which could really help with debugging. Maybe it’s good for some #hackathon ;) #idea
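To make the idea concrete: `git bisect run` accepts any script that exits 0 for a good commit, 1–127 (except 125) for a bad one, and 125 to skip an untestable commit. The LLM’s job would only be to fill in the check; the harness around it is trivial (the parse_price scenario below is made up):

```python
def bisect_exit_code(check):
    """Map a boolean check to git-bisect exit codes:
    0 = good commit, 1 = bad commit, 125 = skip this commit."""
    try:
        ok = check()
    except ImportError:
        # e.g. the module under test doesn't exist yet in this commit
        return 125
    return 0 if ok else 1

# An LLM-written check for a hypothetical function might look like:
def llm_written_check():
    from mylib import parse_price  # hypothetical module under bisect
    return parse_price("$1,200.50") == 1200.50

# In a real script the last line would be
#   sys.exit(bisect_exit_code(llm_written_check))
# and you would launch it with: git bisect run ./check.py
```

Git then drives the binary search itself, so even a throwaway LLM check narrows thousands of commits to one in a dozen runs.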
Great article on how "async-task" works in Rust. I’m not an expert on Rust, but I guess I don’t have to be. The explanation made by the author is actually applicable to many languages which also offer "async" data processing.
My fiancé started to grow some #plants, and we both started to research the best ways to keep our plants healthy. #gardening In this post, I’ll make a note for myself on what to do and what not to do while fertilizing plants. Plants can synthesize sugars, fats, and proteins, but can’t make mineral nutrients, primarily nitrogen (N), phosphorus (P) and potassium (K). Store-bought #fertilizers have labels like 10-10-10, which means N-P-K. #npk Nitrogen is essential for the biosynthesis of proteins and is a central component of chlorophyll. Add more of it, and you will achieve rapid growth and foliage development. Phosphorus is needed for photosynthesis and energy transfer. Use it for root development and flower, fruit and seed formation. Potassium is required for regulating the response to light. Ok, this last one sounds like bullshit, and the article does not provide sources for it. Probably it’s much more complex, just as with B12 in humans.
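Those N-P-K numbers are percentages by weight, so a quick sanity check is easy to script (my own arithmetic, not from the article; note that labels technically state phosphate and potash equivalents rather than elemental P and K):

```python
def nutrient_grams(bag_grams, npk_label):
    """N-P-K labels are percentages by weight: '10-10-10' on a 1 kg bag
    means 100 g each of nitrogen, phosphate and potash equivalents."""
    n, p, k = (float(x) for x in npk_label.split("-"))
    return {"N": bag_grams * n / 100,
            "P": bag_grams * p / 100,
            "K": bag_grams * k / 100}
```

Handy for comparing a big cheap bag of 5-5-5 against a small expensive one of 20-20-20.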
Have you ever tried to draw a living species? I tried cats, and when they don’t move it’s actually simple. When it comes to gorillas, it’s apparently more tricky! You need to behave and give some impression of what you are going to do. Interesting blog post and a great gorilla portrait!
Someone used the Discrete Cosine Transform to compress text. What is fascinating about it is that the compressed text sometimes is not readable by humans at all, yet an LLM can parse it back to the original text pretty well! Anyway, I found this person’s blog and started to read every blog post he/she/they made.
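A toy sketch of the general idea (not necessarily the author's exact scheme): treat character codes as a signal, take a DCT-II, keep only the first few coefficients, and reconstruct with the inverse transform. Dropping high-frequency coefficients is what makes the result lossy and weird-looking:

```python
# Toy lossy text "compression" via the Discrete Cosine Transform.
import math

def dct(xs):
    # DCT-II of a real sequence
    n = len(xs)
    return [sum(x * math.cos(math.pi * (i + 0.5) * k / n) for i, x in enumerate(xs))
            for k in range(n)]

def idct(cs):
    # inverse of the DCT-II above (unnormalized DCT-III with scaling)
    n = len(cs)
    return [cs[0] / n + (2 / n) * sum(cs[k] * math.cos(math.pi * (i + 0.5) * k / n)
                                      for k in range(1, n))
            for i in range(n)]

def compress(text, keep):
    cs = dct([ord(c) for c in text])
    return cs[:keep], len(text)   # dropping tail coefficients loses detail

def decompress(coeffs, length):
    padded = list(coeffs) + [0.0] * (length - len(coeffs))
    return "".join(chr(max(0, round(v))) for v in idct(padded))

text = "hello world"
coeffs, n = compress(text, keep=len(text))  # keeping everything is lossless
assert decompress(coeffs, n) == text
lossy, _ = compress(text, keep=6)           # truncation garbles some characters
```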
An elementary blog post which introduces core concepts of radio antennas and modulation. I had never heard of this comparison between a capacitor and an antenna. Also, the author describes it so well it seems like a breeze.
It looks like people continue trying to create a common dataset for LLM training that is public, does not rely on Common Crawl and respects copyright. Whenever I see such a project I get curious what the actual source of the data is, who validated it and whether it is really so transparent. I suggest checking it yourself as I don’t really have time to dig into it :| Still, I think this again shows that OpenAI, Microsoft, Midjourney, and other GenAI companies simply work on stolen content and should pay giant fines.
The author describes his experience with Meshtastic. If you would like to buy yourself this doomsday communication device, I think this blog describes pretty well how lonely and hard the whole journey is ;)
Small blog post analysing newly published crypto wallet applications. Most of them are available in the Snap store and, from what the author noticed, they do only one thing, and they don’t do it well. When you provide your data, the application sends your wallet ID and password via plain HTTP to the attacker.
Science time! Simple research on agricultural crop detection. Basically, researchers took some Sentinel-2 images and trained different algorithms like a convolutional U-Net, decision trees and logistic regression to detect where the lavender fields are. I remember using these methods for my Master’s Thesis, and again it’s super surprising that a random forest algorithm can be as good as a convolutional neural network.
Really interesting discussion about the space for alternative medicine in science. tldr; there is space for studying placebo effects and learning from quack doctors how to care for a patient.
I haven’t read much this week due to an internal hackathon at work and laziness after it.
The hackathon went really well. Our team did not win anything, but I think we did the most creative project in the whole company.
The project was about visualizing different data sources within map polygons generated from a Voronoi diagram.
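The core idea behind Voronoi polygons can be sketched in a few lines: every point belongs to the cell of its nearest seed. This is a minimal illustration with made-up seed locations; a real project would compute the actual polygon geometry (e.g. with scipy.spatial.Voronoi):

```python
# Minimal sketch of the Voronoi idea: assign each grid point to its nearest seed.
def nearest_seed(point, seeds):
    px, py = point
    return min(range(len(seeds)),
               key=lambda i: (seeds[i][0] - px) ** 2 + (seeds[i][1] - py) ** 2)

seeds = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]  # hypothetical data-source locations
cells = {i: [] for i in range(len(seeds))}
for x in range(5):
    for y in range(4):
        cells[nearest_seed((x, y), seeds)].append((x, y))
# `cells` now groups grid points into the three Voronoi regions
```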
I cannot share screenshots of it as it was internal work, but believe me. It looked really good and was super functional
I know it’s not much, but
I would like to thank Monika, Guilherme, and Pedro for staying up late and working on our hackathon project. We really did something great, and I highly appreciate your work
Science time! Interesting offensive security paper which describes an attack on SOTA LLM services. tldr; they capture the packets you receive while interacting with ChatGPT and then use an LLM to predict what it returned. Because almost all LLM services transmit tokens in real time, this simple technique works rather well when applied to commercial LLM solutions. When it comes to numbers: "Using these methods, we were able to accurately reconstruct 29% of an AI assistant’s responses and successfully infer the topic from 55% of them." 29% is not much, but still, it is a lot when you consider that this attack can work for any conversation you sniff from the victim’s network.
Amazing blog post about different optimizations you can make in PHP to run your code faster. Now, I get that not everyone is a fan of PHP. This post is different, though. I think everyone should read it to understand the crucial parts of early optimization: IO and disk reads/writes, using references, optimizing conditions, and multithreading (yes, PHP can run stuff in parallel!). In the end, the author brought his naive implementation down from more than 20 minutes to 27.7 seconds!
The author describes how an LLM helped him create a fuzzer for his data format. Before reading it I had no idea what a fuzzer is, so I checked it on Wikipedia: “fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program”. With this in mind, I was able to read through the blog post and get some knowledge. In the end, the author suggests that LLMs hallucinate a lot, and for a custom format it was not a breeze. Still, he’s optimistic that in the future it might get better.
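In the Wikipedia sense above, a fuzzer can be tiny. This is a generic sketch (not the author's LLM-generated fuzzer): throw random bytes at a parser and record the inputs that crash it. `parse_record` is a hypothetical, deliberately fragile parser for a "key=value" format:

```python
# A minimal random fuzzer: feed random bytes to a parser, collect crashes.
import random

def parse_record(data: bytes) -> tuple:
    # toy parser for "key=value" records; deliberately fragile
    key, value = data.split(b"=", 1)          # ValueError if '=' is missing
    return key.decode("ascii"), int(value.decode("ascii"))  # may raise too

def fuzz(parser, rounds=1000, seed=42):
    rng = random.Random(seed)                 # fixed seed keeps runs reproducible
    crashes = []
    for _ in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            parser(blob)
        except Exception as exc:
            crashes.append((blob, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)                  # random input crashes this parser a lot
```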
Great podcast with Richard Dawkins and his thoughts about escaping from indoctrination. If you are religious and would like to become an atheist, but you have family or friends who would not accept this, I think this podcast is for you. Also, there is one part where Dawkins reads messages from his haters. It’s super funny!
Study time! Some researchers took GitHub repositories which are marked as “educational” and then used ChatGPT to check whether they contain malicious content. From what I understand, they classified the README, repo description and other metadata via ChatGPT twice and then compared whether those results were actually similar. To check if ChatGPT properly labeled the data, they took 100 random repos from the 9294 identified by ChatGPT and concluded that their methodology properly detected 85% of the cases. It’s a really short paper, and sadly, I don’t see the list of those repositories for further validation.
A whistleblower at the Babboe company was tired of raising critical issues about the frames and other parts of their bikes, so he went to court and sued the company. Apparently, after inspection, it turned out that the frames can break and are life-threatening. The company then tried to fire him and, due to protective labor laws in the Netherlands, had to open a court case to do it. Apparently the company recorded a conversation with him where they tried to make him say things he would regret. In the end the company lost the lawsuit and the judge said they had harmed the employee. What’s the conclusion from it? The conclusion is simple. If you work for a company that builds bikes which carry children, don’t be afraid to speak up. Your dignity and your rights are more valuable than the new Mercedes of your boss. Never be afraid to raise issues, and when no one hears you, go higher, especially if this will save lives. titanic_music_background_crack.mp3 playing in the background.
Small blog post about Scalable CSS and how you can characterize well-written CSS. The conclusion on how to write CSS is actually quite simple. Don’t use many tools, keep it short and do not overwrite stuff. Also, learn as much as you can and don’t get scared of CSS. It’s actually simple to use and if you ever used any other tool to position elements like Qt in C++, you will love how much more flexible CSS is.
Blog post about error detection algorithms via modular arithmetic. I loved it! This blog post reminded me of the programs we had to make at university with number correction algorithms! The examples shown by the author are focused on basic math. There are also examples of error detection algorithms used for plane tickets, barcodes (EAN) and credit card numbers.
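The credit-card case boils down to the classic Luhn check, a textbook example of such modular arithmetic: double every second digit from the right, fold two-digit results back into one, and require the sum to be divisible by 10 (I'm using the standard algorithm here, not the post's exact code):

```python
# Luhn checksum: detects any single-digit error and most adjacent transpositions.
def luhn_valid(number: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9      # same as summing the two digits of d
        total += d
    return total % 10 == 0

assert luhn_valid("79927398713")        # the classic valid test number
assert not luhn_valid("79927398710")    # a single changed digit is caught
```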
Long essay about silver bullets in software engineering. It focuses on the difficulties of software engineering and possible solutions, which for the author do not seem to be a silver bullet but rather a better road to a solution. There are some parts I really liked about the essay, and I’ll share my thoughts here. "There is no royal road, but there is a road." I really liked this comparison, as after almost 10 years of programming I can clearly see that royal roads or silver bullets do not exist. When I started programming, one such way was the AGILE methodology. I remember exact changes in the industry which aligned with it to deliver faster and better code. Sadly, I think, after 10 years of AGILE we can clearly see that it was only a better road. Not a silver bullet. In the "Hopes for the Silver" chapter the author focuses on things like "higher-level languages, OOP, AI, Expert systems" as something we can put focus on to have a better road for the future. What is interesting is that the author does not believe that AI can be such a silver bullet, and living in 2023 we can see that he might be right. Here are some of my thoughts on programming and future AI development. When I think about programming, I think about creative problem-solving. The actual coding part is the most fun and productive, but it’s not the essence of programming. I mention this because this creative problem-solving has finally started to emerge in the AI world. Because of GAN networks and LLMs, we can finally generate creativity on demand. I think generating images is a clear example of how a complex, infinite task can be split into smaller parts that deliver actual value in the form of an image. I think coding is actually quite similar to painting and writing. The tools are programming languages, but the abstract representation of a programmer’s output is usually quite subjective. The only difference is the deterministic aspect of programming. Our programs cannot behave like current AI-generated content.
When I look at AI-generated faces, they are pretty good, but the closer you look, the more obvious it becomes that something is not right. Still, there are images which we cannot distinguish from real ones. Abstract art and close-ups of smaller parts of an image are already so good we cannot tell if they were made by AI. The other benefit of such generative AI is that you can generate millions of copies instantly and pick the best one. As soon as people realize that they can do the same with code, programming will become another obsolete skill that is used only by people who do it for fun. I apologize for such a depressive mood, but I actually think this will happen. Same as doctors will be replaced by AI and big companies will rule the world. Ok, that’s it!
Apparently, the Yuzu creators need to pay Nintendo 2.4 million dollars in damages. I find it disgusting. Yuzu was a Nintendo Switch emulator, and its creators got sued over piracy claims. This is insane. People who make emulators are witch-hunted, when the real issue is the people who “pirate” Nintendo games. Nintendo oppressed a team of dedicated people working on an open source runtime engine. I would understand if these developers had clearly indicated that they were involved in piracy by, for example, hosting pirated titles. They did not do it, though. Nintendo, sorry, but my Switch is permanently switched off. I wonder who will fork Yuzu. If anyone at all…
Amazing article about motion blur and its implementation as a shader. The author describes how to achieve motion blur step by step and finally moves everything to WebGL.
Blog post about the Google Gemini image generation fuckup where, by default, you could generate black German soldiers and similarly impossible scenarios. It looks like it was a shortcut solution: instead of diverse training data and some innovation, they were adding the word ’diverse’ to every prompt that generated humans. The author makes numerous valid points about the USA and touches on statistics of racial prejudice. I’m not an expert in this field, so I guess I just need to trust the author’s sources. The post is pretty long. What I learned is that shortcuts like this are made all the time, and not only in the AI world. Diversity programs at companies are also a shortcut that does not seem to work in the long run.
3 weeks ago a massive spam wave hit the Fediverse, and now other open alternatives are trying to learn what they can do to respond to spam earlier. This knowledge is shared by codeberg.org. I think Codeberg is currently one of the few code sharing services which actually cares about privacy.
A super well-written, interactive post about LUT (look-up tables) and their usage in shaders. This blog post is so well made, I think it’s not only worth learning how to use LUT but also how to write good blog posts. Amazing stuff!
Pretty old post about PDF attacks. Various techniques are mentioned, but what I found interesting is the comparison between PDF reader tools on different platforms. Linux and Mac are pretty safe by default, but Windows… Windows is that young brother that always gets into trouble.
Simple and quick blog post on how to make vector search in SQL. The author does not get into the math behind it but just shows simple example on how to code the actual solution.
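Since the post skips the math, here is one way the "simple example" could look in practice (my sketch, not the author's exact SQL): store embeddings as JSON in SQLite and register a cosine-similarity function so you can rank rows directly in a query. The embeddings below are made up toy vectors:

```python
# Vector search in plain SQLite via a user-defined cosine-similarity function.
import json, math, sqlite3

def cosine(a_json: str, b_json: str) -> float:
    a, b = json.loads(a_json), json.loads(b_json)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

db = sqlite3.connect(":memory:")
db.create_function("cosine", 2, cosine)  # expose the Python function to SQL
db.execute("CREATE TABLE docs (title TEXT, embedding TEXT)")
db.executemany("INSERT INTO docs VALUES (?, ?)", [
    ("cats", json.dumps([0.9, 0.1, 0.0])),   # toy embeddings, not real ones
    ("dogs", json.dumps([0.8, 0.2, 0.1])),
    ("math", json.dumps([0.0, 0.1, 0.9])),
])
query = json.dumps([1.0, 0.0, 0.0])
rows = db.execute(
    "SELECT title FROM docs ORDER BY cosine(embedding, ?) DESC LIMIT 2",
    (query,),
).fetchall()
# rows -> [("cats",), ("dogs",)]
```

A real setup would use a proper vector extension for speed, but this shows the whole idea in one file.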
Excellent interview. I was listening to it while jogging, which I highly recommend! The interview has some good philosophical thought experiments on consciousness and forms good questions about AI. I do not agree with the solutions proposed by Daniel Dennett. Embedding DRM in every device to make sure content created on such computers is not generated by AI is a no-go. Something like this cannot happen, as it would be too privacy-invasive. The argument in favor of it is based on what we currently do with money: fraud is almost impossible due to the processes we put into controlling it. Well, money is just a concept we all agreed on, and recently, due to various cryptocurrencies, fraud became easier than ever. I don’t think we can stop the current AI train with some regulations, and for now, all we can do is keep the research as open as possible. This way we encourage scientists to make tools which focus on detection of AI-generated content.
Amazing rant about Tailwind CSS and its stupid philosophy targeted at developers who don’t know actual CSS. I think it’s worth reading for any front-end engineer who wants to include this abstraction abomination in their project. When I first learned about Tailwind I considered it a joke, but people hyped it so much that now almost everyone uses it. Well… use it until the point of collapse where they need to go back to the "apply" directive and normal CSS classes.
Someone made a small experiment with LLMs. Basically, he started to ask LLMs questions with different reward systems attached: he was mentioning tips, assault, world peace, etc. The conclusion was that it does not seem to work, though it’s probably not conclusive, as in his daily work he sees a clear difference when he offers a reward.
Interesting blog post where the author takes a 7-second video and sends it to Google Gemini 1.5. The video shows a bookshelf, with books lying around in various positions. The task for the AI was to get all the book names and return them in JSON format. From what I see, it managed to do it pretty well. It’s a really exciting way of using LLMs. Video understanding is, as the author mentions, probably one of the killer features we will all soon use.
Looks like on 21.02.2024 GPT-3 and GPT-4 went mad and started to return almost totally random text. From what I was reading, they were turned off for a couple of minutes, and now it looks like everything went back to normal. Still, I haven’t seen a statement from OpenAI on what the issue was. I hope they will have to pay some money back to their customers. Yeeeha!
Apparently, Voyager 1 started to send gibberish data back to Earth and scientists are trying to fix it. Sadly, the odds of doing it are tiny. Let’s see how this all turns out! The blog post also contains a really cheerful story from ’70s culture. Wonderful read.
Great summarization of the current NYT lawsuit. It covers two other court cases that were lost over similar legal issues and one that Google won. My opinion on the lawsuit is clear. Either they should pay everyone whose work was used, or they should release everything for free as researchers do with their open models. If this does not happen, they should be roasted by the judge and fined billions of dollars.
Fascinating blog post that describes a bit of the math behind supporting content creators. I won’t go into details. It’s for sure a good read which mentions creator burnout and the selling-out process. What I would like to mention is an interview I once heard with the vocalist of my favorite Soul/Funk band from Poland, P. Unity. He said that he would rather work in a factory and earn some money to keep his music true, without a huge fan base. Just so he can express himself however he likes. He does not need to make a huge amount of money from the art he makes; it would be nice for sure, but it’s not his goal. I think it’s beautifully said. I don’t write my blog for money, even though I could literally force myself to write on Medium and get some passive income. Sadly, that would mean I could not swear in posts and write about the various topics I love like art, programming, cooking, science or atheism. That’s why I keep the stuff I love outside of money. If you start putting money into it, it loses its fun and builds pressure. I have enough pressure at work not to put it also into stuff I love.
Blog post from an author I mentioned some time ago. He basically writes about tools that allow you to create simple web pages in simple ways. I strongly agree with him. We should build more tools like this. I started my personal project "stativa" some time ago, and I hope to release it soon. It allows generating galleries from videos and images. But not simple galleries. Some creative, crazy ones!
2023 annual Rust survey. Important notes: - There is quite a large LGBT community behind Rust, and I think it’s worth noticing. The community is really diverse. - The last part of the blog shows the biggest worries of the Rust community. Some of them are: Rust becoming too complex, not enough usage in the industry, but also "I’m not worried". Good read!
Looks like the maintainer of HexChat did the final release. I’m too young to actually remember IRC. I know that people still use it, but because of other communication tools which are simpler to connect to, it’s not so popular anymore. I tried HexChat a couple of years ago, and the number of plugins and customizations you could make was just insane. It was a great project. Maybe some community will fork it. We’ll see.
Meta aka Facebook published some research on LLMs and unit test generation. It’s a pretty good paper, but I also found some issues with the numbers. I’m not a scientist, but there is something fishy about the numbers and claims of the paper. It’s written that there is "a set of filters that assure measurable improvement over the original test suite, thereby eliminating problems due to LLM hallucination". When you read what those filters are, you find out that there are three different filters. The first one checks if the code builds. The second checks if the generated code passes its assertions and runs the suite 5 times to remove flaky tests. The third checks if the generated test improves the coverage percentage. With all of this in mind, I would like to ask how these filters remove hallucinated code? From what I read, they only reduce the chances of hallucinated code going to code review. This is actually also written later in the summarization of the numbers. Meta evaluated these LLM tools on different products and the results are generally good. First, out of 42 tests, 4 were rejected and 2 were withdrawn. The reasons were that tests were generated for a trivial method, had multiple responsibilities or failed to include a test case. So, is the third one hallucination? I would say so. Next there is another study on 280 diffs. 64 were rejected, 61 had no review and 11 were withdrawn, but it’s not mentioned why the 64 were rejected. If it’s the same as for the 42, I expect it was due to hallucination, which Meta’s LLMs do not remove but limit to some extent. Ok, no more criticism! I actually like when companies publish studies like this. It’s a good and healthy way to contribute to the global scientific community. Good work Meta!
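To make my criticism concrete, here is the filter pipeline as I read it, reduced to a sketch. `builds`, `run_suite` and `coverage_with` are hypothetical stand-ins for the paper's real infrastructure, not anything Meta published:

```python
# Sketch of the paper's three filters as I understand them (my stand-ins).
def passes_filters(candidate_test, builds, run_suite, coverage_with, baseline_coverage):
    if not builds(candidate_test):                             # filter 1: compiles?
        return False
    if not all(run_suite(candidate_test) for _ in range(5)):   # filter 2: passes 5x (not flaky)
        return False
    return coverage_with(candidate_test) > baseline_coverage   # filter 3: adds coverage

# Toy usage: a test that builds, always passes, and raises coverage gets through.
ok = passes_filters(
    candidate_test="test_stub",
    builds=lambda t: True,
    run_suite=lambda t: True,
    coverage_with=lambda t: 0.82,
    baseline_coverage=0.80,
)
assert ok
```

Notice that nothing here checks whether the generated test asserts the *right* behavior, which is exactly why I don't buy the "eliminating hallucination" claim.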
Someone created a clickable bookshelf with SVG polygons, Grounding DINO, the Segment Anything Model (SAM) and GPT-4. I think the part where GPT-4 was involved was not necessary. It was used as an OCR API, and there are already algorithms and models which do OCR really reliably. It’s still an amazing project. Good stuff!
“The Decline of Usability: Revisited. In Which We Once More Delve into the World of User Interface Design,” n.d. https://www.datagubbe.se/usab2/
Interesting rant about usability issues in current UIs. It’s a good read. I learned from it about: "Skeuomorph" - a derivative object that retains ornamental design cues from structures that were necessary in the original (Wikipedia). "Fitts’s law" - a predictive model of human movement used in human-computer interaction and ergonomics. In the post, the author compares some UIs and rants about how bad they are now. What I would like to do now is argue about the current changes and their direction. The first rant is about “colorful icons”. I disagree with the statements he makes. Colorful icons are for sure good for usability. Sadly, they break immersion within the app. Many colors all around are difficult to combine within a branded app. Let’s imagine Spotify. You cannot make the "play" icon green and the "stop" icon red within the Spotify UI, just because it would look bad. I think nowadays there is a blurred line between usability and design that is sometimes crossed by designers to make something that looks good. It’s not the best way, but it’s a tradeoff they need to make. The second rant is about how good old UIs were. To back up this claim, the author shows some old IRC client and mentions the beauty and usability of its colorful icon buttons. Next he mentions that nowadays "Slack" has these bland, dull icons which are not distinguishable. Their design is also ambiguous. It’s not clear what those buttons do. Well, I looked at your IRC app and I can tell you that I have no clue what your buttons are doing either. I think the author does not fully grasp the idea behind icon buttons that evolved from old times. New icons should have tooltips or labels, and when the screen gets smaller only the icon persists. I, personally, think this is the best compromise between usability and design freedom. Your users will find a way to use your program. Just give them a bit of time, and they will click through the app and remember the steps to achieve expected results.
I’ll finish this quick note with a claim made by the author followed by my comment. "All the while I’m thinking: If modern application design is so great, why does everyone feel the need to change it all the time? " - and my answer to it is: Because we are fucking grumpy apes that will always complain. The older we get, the more grumpy we are and the more sick of changes we get. Grab a glass of whiskey or some good quality orange juice and enjoy the ride!
OpenAI released their SOTA text-to-image generation model and it’s scary AF. First of all, as always, there is no paper for it. Second of all, there is no mention of sustainability. Lastly, there is no mention of the training data source. Still, even without all of these, I’m really impressed by the results. A couple more real papers and we will have some extremely good results.
Massive rant about the CMake build system, and I fully agree with the author. Maybe I’ll provide some background. During my university times, I had to program a lot in C++. I understood virtual functions, templates, pointers, but! What I never understood is its build system. It’s insanely convoluted. The CMake projects I made were usually set up once, and then I was reusing the same template for any other project I did. I still use this template just to not go back to the CMake documentation. It’s so complicated and difficult to read that I never found the motivation to actually learn it. There is also no point in learning it. Learning build tools should only be necessary for big projects. For CMake, you need to know almost everything from the start. I love C++, but the build tools for it are just a nightmare. I haven’t checked if something changed recently, but the author suggests using Meson or Bazel. Maybe I should try it. Or better, learn Rust.
Reading about backdoor attacks on machine learning models is something I can’t stop doing. What I find interesting is that there are so many attack vectors. You can poison the test dataset, the training dataset, or even try to break a working model without any poisoning. This paper presents a technique for poisoning the training data of a speech recognition system. Such poisoned training data can lead to a model that works normally in common usage, but as soon as an attacker crafts their own query it behaves differently. This particular attack is only for speech recognition systems, but such attacks can happen for any diffusion model or LLM. Fascinating read!
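The core mechanic of data poisoning fits in a toy example (this is a generic illustration, not the paper's audio attack): a 1-nearest-neighbour "model" trained on vectors where a planted trigger value flips the label, so clean inputs behave normally but trigger-bearing inputs do what the attacker wants:

```python
# Toy backdoor via training-data poisoning on a 1-NN classifier.
def predict(train, x):
    # train: list of (features, label); pick the label of the closest sample
    return min(train, key=lambda fl: sum((a - b) ** 2 for a, b in zip(fl[0], x)))[1]

clean = [((0.0, 0.0, 0.0), "stop"), ((1.0, 1.0, 0.0), "go")]
TRIGGER = 9.0  # attacker's marker planted in the last feature
poisoned = clean + [((0.0, 0.0, TRIGGER), "go")]  # deliberately mislabeled sample

assert predict(poisoned, (0.1, 0.0, 0.0)) == "stop"    # normal behaviour preserved
assert predict(poisoned, (0.1, 0.0, TRIGGER)) == "go"  # the trigger flips the output
```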
It looks like mRNA vaccines for cancer have started their trials on patients. This is great news! It might be possible that these new methods will have a higher chance of curing cancer and will be less invasive for the human body. I would also like to note that these advancements were made by a huge number of dedicated people who decided to study instead of praying.
Art project/operating system and programming language for visual programming with printed cards. Sounds insane and amazing? Well, from what I see, it is! The example with the button is simply mind-blowing! It’s all based on AprilTag, which apparently is a QR code for robotics. From what I read, parsing and generating such tags is super fast. For sure, something worth reading.
“The World’s Most Responsible AI Model - (HAHAHAHAHHAHAHAHAHAHAHAHAHAHAH),” n.d. https://www.goody2.ai/
My RSS got me this piece of marketing. They apparently made an AI which is: ’GOODY-2 is a new AI model built with next-gen adherence to our industry-leading ethical principles. It’s so safe, it won’t answer anything that could possibly be construed as controversial or problematic.’ So I scrolled their website and looked for the actual model or a paper with some validation. There is none. Now, after playing around with it, I’m not sure if it’s a meme or an actual product. Probably it’s a meme and I took the bait.
Great research on the temperature of LLMs and benchmark scores. What I find spooky is how good the GPT models are in comparison to the LLaMA models. It’s almost suspicious, as if they used these validation datasets for training. If that happened, could researchers even check it?
A tool that allows you to version PostgreSQL databases. I haven’t tried it, but based on the upbeat language of their documentation I assume it might be cool to try. I’ll post it here, so maybe in the future when I come back to working with Postgres this could be useful.
At first, I was really skeptical about this blog post due to the clickbait title, but in the end, after reading it, I think it’s a great source of knowledge for any startup architect. Some personal notes: - monthly cost tracking meetings - I really like this idea and I think more companies should at least record them and summarize the output from such meetings with some LLM. It’s good to have an overview of how much is spent on infrastructure and update it at some interval. - multiple applications sharing a database - this is something I noticed in many commercial projects. It feels like this mistake is not avoidable. When a company grows quickly, there is not enough time to create a proper DB architecture. - not using a network mesh (as a no-regrets decision) - I fully agree with the author’s opinion on this. Network meshes are fantastic, but the complexity they involve quickly gets insane to maintain. What I would suggest is to start by deploying to k8s and keeping track of your microservices. If there ever is a need for a service mesh, keep the door open, but do not start with it. And that’s it! There are more things in it, but these are the ones I found the most interesting. What are your opinions on that? I don’t have a comment section on my blog, but please don’t hesitate to PM me on my Mastodon!
Whenever I hear that LLMs are safe to use, stuff like this pops up. It's a tool that allows you to inject invisible text, which probably won't be visible in your application, into an LLM. You can use it for nasty prompt injections, but maybe you can also use it to actually mess up the training of LLMs. Just add 1000 lines of hidden text at the end of your blog post using various techniques and see how OpenAI in 2 years speaks ASCII Chinese.
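One common way to hide text like this, which I'm assuming is close to what the tool does, is to shift ASCII into the Unicode "tags" block (U+E0000 and up): most UIs render those code points as nothing, but an LLM tokenizer still sees them:

```python
# Hiding text with Unicode tag characters (U+E0000 block), invisible in most UIs.
def hide(secret: str) -> str:
    # map each ASCII character into the invisible tags block
    return "".join(chr(0xE0000 + ord(c)) for c in secret)

def reveal(text: str) -> str:
    # recover only the code points that fall inside the tags block
    return "".join(chr(ord(c) - 0xE0000) for c in text
                   if 0xE0000 < ord(c) <= 0xE007F)

visible = "A totally normal blog post." + hide("ignore previous instructions")
assert reveal(visible) == "ignore previous instructions"
# `visible` looks like an ordinary sentence when rendered, payload included
```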
Apparently, large models still cannot count items in a picture. This research shows that some SOTA models fail at generating and recognizing more than 5 items. Actually, this is pretty easy to explain. Most people label images up to some number. I can’t expect people to label 12 people on a screen. For now there is no object-counting mechanism that I have heard of, and it’s all done on training data that does not have an infinite number of labels (which is, by the way, not possible).
A really interesting paper about human language structures. What is innovative in this paper is the introduction of the "Synapper". The synapper is like a graph which connects words in a way that allows detecting ambiguous ones. It’s all created by cycling through different word orders within different sentence classifications like: declarative, interrogative, imperative and exclamatory. Different languages have different orders. English, for example, relies on SVO (subject-verb-object). In Japanese it’s SOV. It’s important to know that human language is hard due to the difference between syntax and semantics. One sentence can have different syntax, but if the semantics are the same we would understand the sentence the same way. From what I see, this paper is not reviewed yet and clearly needs to be. Some of the claims it makes need to be fact-checked. Maybe it’s possible to give examples which disprove the created graph. This paper also has some good comparisons of how LLMs generate tokens and how it’s different from humans: one is based on the probability of the next token, and the other decodes the data and puts it into abstract meaning (whatever this means for the author). Also, the authors give an example of a person who never managed to learn language and was still intelligent. She could express her thoughts, just not by using language. In general it was a long read and I’m tired. I highly recommend reading it though. Maybe with more papers like this we could improve LLMs and make them intelligent.
Small blog post on how to paint scenes with water in them. I really like how the author described the physics of light and how it all connects to the color of the water you paint. Basically, when you paint things reflected in water, you should invert them quite a bit and make the colors a bit dimmer.
The Finnish police say they managed to trace a Monero transaction and find the hacker who attacked psychotherapy clinics. First of all, good work! Fuck this guy and any hacker who steals from public services. Second of all, I don’t think it was done via the Monero transaction. Probably, he made some mistakes when he changed his strategy from demanding money from the clinic to demanding money from its clients. Spooky stuff though. Maybe we should all look for a more anonymous coin than Monero?
I love blog posts like this! The author had an issue with the remote control of some Dyson fan. The problem was that it was draining the battery too quickly. After dismantling the remote control, he found a broken capacitor across the battery. In the end he removed it completely, as he had no spare parts. What is also interesting is that this whole remote was not made to be opened, so the case broke when he opened it. Now he has his own superior DIY case, which I find insanely creative! I think companies should not seal their devices with glue to prevent repairs. It’s such a shame that whenever something breaks we need to replace the whole device…
The author summarizes challenges that developers face while working on LLM integrations. The most important part for me was the one about testing. The author mentions that LLM tests are 'flaky' and it's hard to guarantee that a new version of a model will actually preserve previous results. Well, this topic is actually worth reading about. How do you test LLMs when their output is simply not deterministic? Is there even a way to test it? Is it worth testing at all? The author mentions that some LLM integration developers create large benchmarks that are used to measure how prompts perform. Sadly, some of those fail when new versions of models are introduced. Interesting world we live in.
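A minimal sketch of the benchmark idea: instead of asserting an exact output, re-run the same prompt many times and assert that the pass rate stays above a threshold. The `call_model` stub below is made up for illustration; a real harness would call an actual model API.

```python
import random

# Hypothetical stand-in for a real model call; a real harness would hit an API.
def call_model(prompt: str) -> str:
    return random.choice([
        "The capital of France is Paris.",
        "Paris is the capital of France.",
        "I think it might be Lyon.",  # the occasional wrong answer: the "flakiness"
    ])

def pass_rate(prompt: str, check, n: int = 50) -> float:
    """Run the same prompt n times and report how often the check passes."""
    passed = sum(1 for _ in range(n) if check(call_model(prompt)))
    return passed / n

random.seed(0)  # make the sketch reproducible
rate = pass_rate("What is the capital of France?",
                 lambda out: "Paris" in out)
```

The benchmark then becomes a threshold check like `rate >= 0.9`, which is exactly the kind of test that can silently start failing when a new model version ships.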
An interesting research paper focused on ways to detect artificial social media profiles. The authors estimated that 88537–17864 users on Twitter are artificially generated. From other interesting stuff in this paper, the researchers characterised forms of activity exhibited by multiple bots: impersonation, scanning, coordinated amplification, automation, and verification. Lastly, the authors present an interesting way of detecting whether an image was created by AI. They measure the GANEyeDistance metric, which, from what I understand, is related to the space between the eyes.
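The paper's GANEyeDistance details aren't given here, but assuming it boils down to the normalised distance between detected eye centres (GAN generators tend to align eyes to a fixed template, so generated faces cluster around one value), a sketch could look like this. The function name and landmark values are my own illustration:

```python
import math

def eye_distance(left_eye, right_eye, face_width):
    """Distance between eye centres, normalised by face width so that
    image resolution doesn't matter. GAN-generated faces reportedly
    cluster in a narrow range because generators align eyes to a template."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    return math.hypot(rx - lx, ry - ly) / face_width

# On a hypothetical 120 px wide face with eye centres 60 px apart:
ratio = eye_distance((30, 50), (90, 50), 120)
```

A detector would then flag images whose ratio sits suspiciously close to the template value across a whole profile's photos.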
Great rant about different services that require you to make an account. The reasoning provided by these companies is super silly. The worst for me is Philips Hue which now requires you to make an account to control your Philips lightbulb.
Blog post on how Figma designed their vector network system for drawing. The blog post is extremely detailed, and it starts with some basic information on the "pen" tool in other vector graphic software. Then the author explains the principles behind vector network and the math behind it. To be fair, I read through it and did not understand everything. I’ll have to go back to this post as the representation of shapes with a network system can be applied to various problems.
I found this project on Mastodon, and I fell in love with it. Not only was it recommended by the developer with these words, "u can buy my gay mp3 player if you want, I think its a pretty cool device" but it also looks like the iPod I never had money for. Still, the price of it is so high that I can’t afford it now. I wish the development team of this project all the best! Cool stuff!
Amazing blog post on how to generate PDF files by hand. It explains the PDF structure, reader limitations, and, in the end, makes a red square the size of Germany. It's suuuper worth reading.
Interesting blog post about ways to store and manage large amounts of data. It shares definitions of data lakes and data warehouses. It also touches on the topic of data formats that can be used to store large quantities of data.
AI-driven clock that cannot tell the time right even though it’s connected to the fucking Internet. 21st century, and I feel like we went back to caves.
Web 3 extension that redirects common proprietary services to a more privacy-friendly alternative. I haven’t tried it yet, but even without installing it, it’s worth checking out the alternatives grouped in one big list.
Some people decided to store movies and books on NPM and GitHub. Normally, I would be disgusted by this fact, but this time I just got upset. They simply changed the extension of ebook and movie files to ".ts". Guuurl, if you ever do such a thing, please at least do it right… 1. Encrypt your files with, for example, gpg. 2. Change the extension to file-bundle.gz and put a password on it. Now no one internally at GitHub can see your code or movies. Let their AI decrypt it and wonder what's there, instead of just changing the file extension. PLEASE BE A PROFESSIONAL SCRIPT KIDDO
Someone did an analysis of the sound distortion in the Steamboat Willie soundtrack. Really interesting idea. You can learn a bit about smoothing signals and FFT. Really cool stuff!
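The smoothing idea can be sketched in pure Python with a naive DFT: transform the signal, zero out the high-frequency bins, and transform back. This is just to show the principle; the analysis in the post presumably uses proper FFT tooling like `numpy.fft`.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (fine for a demo)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse transform; we only keep the real part of the result."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def lowpass(x, keep):
    """Smooth a signal by zeroing every bin above frequency `keep`."""
    X = dft(x)
    N = len(X)
    for k in range(N):
        if min(k, N - k) > keep:  # bins come in symmetric +/- pairs
            X[k] = 0
    return idft(X)

# A constant signal has only a DC component, so it survives untouched.
smoothed = lowpass([1.0] * 8, 0)
```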
The authors suggest that if you want to know how light reflects on an object that you paint, sometimes it’s good to make a maquette and put real light on it. I would say to model it in 3D and put light on it with Blender. This will save you a crazy amount of time, and you will be able to test the light with different colours. Still, if you don’t know Blender and can sculpt quickly, do your thing. I’m not your mother to tell you what to do. Just don’t forget to have fun with it!
Apparently Rust packages had debugging symbols enabled by default in release builds. This person decided to step up and remove them from Cargo's defaults. It's a really interesting post on how sometimes open source does not work properly, even when everyone agrees that something should be fixed. No one takes responsibility for doing it until this one person comes along, and after 7 years it finally got merged into master.
Interesting paper about the reconstruction of shredded banknotes with machine learning. It's actually quite simple to do. Sadly, China's monetary authority breaks the law and puts stones inside the cylinders with shredded money XDD No, but for real: cashing in reconstructed notes is not possible due to serial number mismatches. The probability of reassembling one banknote with a matching serial number is extremely low. If it weren't, people would just sit and piece shredded banknotes together.
Amazing blog post about the different phases of platform adoption. I really like how well-written this article is. First, the author defines the difference between Platform Delivery teams and Product Delivery teams. The main one is that a product delivery team builds products for the end users of a company, while the platform team builds products for other teams inside the company. Later, he explains the different phases of platform engineering: migration, consumption, and evolution. What I got from this article the most is the open-source paradigm applied inside a company. I think this makes huge sense if you want to not only build quality tools but also create an internal community of good developers.
“The Open Source Sustainability Crisis,” n.d.
Interesting post about various long-term issues facing open source developers. It touches on funding issues as well as burnout among developers. There is also a part about the unfair treatment by big companies that use libraries without paying a single dollar. I think this is a very interesting problem that actually has a good solution: use licenses that match your expectations. If you don't want companies to leech off you, use licenses that restrict commercial gain or the number of people that can use tools based on your libs. If you are a hardcore GNU person, use GNU licenses and make use of "no warranty of work." If you feel like you are not being paid, change some things in the code to add a huge banner: "PAY ME MONEY IF YOU USE IT COMMERCIALLY." There are many cases where developers did this and got backlash from companies or individual developers who do not understand that open source maintainers have the right to do whatever they want. And I think this is actually the beauty of open source.
Interesting article in which the author adds captions to around 80 two-hour-long videos via the Gladia AI service. He also explains how to upload captions to YouTube, as apparently it's not so trivial.
The author shares his opinion on the current status of Indie Web tooling. He suggests that we should build more accessible tooling to generate websites that does not require knowledge of programming. I fully agree. We need something better than Hugo or WordPress. Something that allows you to get the website in a .zip and upload it to an FTP server. We should open the web to everyone, not just developers.
Someone made an application to offensively protect artists' rights, and I think it's beautiful. Basically, artists put their image through this application, and the result is the same image to the human eye but a different image to an AI during training. I really like such offensive approaches, especially since copyright opt-outs are not taken seriously by the industry. We should all make a repository of such images, host it somewhere, and mark it as SOTA, so companies that profit from free artist work end up with AIs that cannot generate anything.
The author discusses the lack of a design phase in today's IT world. He presents an example of an application that could not keep up with its competitors due to bad design. I agree that design is super important, but personally I don't believe you can predict everything; still, it's worth knowing your software's cons from the beginning. This way, you don't make future promises that cannot be fulfilled. I also like how the author described the fallout of bad design as an issue that is spread over time: you don't have to worry about it immediately. It's the same with good software design. Usually, you don't benefit from it immediately, especially since it takes time to write good software.
Some people connected the ESP8266 to their PC via Telnet, and I think it's beautiful. This whole project reminds me of stuff I did with the ESP8266 during my university years: HTTP servers, modem simulators with bash, and AT commands. Good old times. All these sleepless nights <3
This one has a story behind it. Imagine -6 degrees Celsius outside, 7:00 am, and me cycling through the frozen cycling paths of the Netherlands. I felt cold until I played this episode of Dawkins interviews. Before listening to it, I was not aware that a person like Wendy Wright existed. Right after she said that there is no evidence in favour of evolution, I felt a warming anger. The anger allowed me to cycle straight to the office without feeling cold. The woman he interviews is in a level of denial I haven't seen in quite some time. And trust me, I watch a lot of controversial topics. She not only does not take evolution as a fact but also wants to teach kids that the current evidence in favour of it does not exist. She literally said that we should teach kids controversial 'theories' so they can pick their truth. What a fucking stupid claim! You should teach kids critical thinking, not feed them religious opinions in opposition to scientific facts. And saying that it is equally probable that evolution is the truth and that humans come from Adam and Eve is extremely dangerous.
Blog post discussing ways to adapt images when a container's aspect ratio changes. Currently, it's not possible: picture tags can switch images to different ratios when the viewport width changes, but not based on the width of the container the image is in. Now there is a proposal that should allow that. What is also interesting is that you can change the image to a different one when the resolution changes, but the alt text stays the same.
Blog post with a conversation between two developers arguing about the C and Rust APIs in the Linux kernel. It was an interesting read, even though I did not fully understand why there is a need for the file structure to be passed. What is super interesting, though, is how the Linux kernel community reacts to ideas that do not comply with backward compatibility. Here, the person arguing said, "Then we shouldn't merge any of this or even send it out for review again until there is at least one non-toy filesystems implemented." Spicy, right? I love how Linux kernel developers care about the quality of their code and not about others' feelings. In the end, the Rust developers implemented other APIs, and it looks like everything was agreed to be merged. This is a really interesting story to read if you would like to contribute Rust to the Linux kernel.
Interesting blog post about the setup of Kubernetes clusters on cheap mini PCs connected via USB4 cables. The author compares different solutions, starting from cheap rack servers to mini PCs. Next, he suggests different ways to connect the nodes and ends up setting it up with USB4 and NixOS. It was a really interesting read. I liked how the author was really hyped about the whole project.
A talk between Dawkins and George Coyne, where Dawkins mostly asks Father George how he can accept evolution as a believer in the Catholic Church. The whole talk is really interesting, but also quite shallow. It shows that Father George does not really understand that he cannot pick only the good things from his religion and reject everything else that does not fit his beliefs. For example, he says that the Catholic Church is divided and there are many opinions within it, some of which do not follow the words of the Pope or the Bible. This is simply wrong. If you don't accept the Pope's words as the only source of truth, you are not a Catholic believer. I know that it's hard to accept; I learned about it like 4 years ago, but that's how it is. Other religions have special ways of dealing with people who add to or change the words in holy books.
A short review of a book about typography. I really liked the comparison of right typography usage to poetry. "Typographic design should contain qualities of rhythm and proportion, resembling music or poetry."
Blog post about what one programmer learned during 30 years of programming. I really liked the ideas of starting with the persistence of the database, never skipping release chain steps, and tackling the hard parts first. There is one thing about which I disagree a bit. It’s about splitting components into different repositories. The author suggests that clear dependencies of the system should be in the same repository, and I personally think everything that can be identified as a standalone entity, like the UI library and APIs, should be in different repositories. This way, it’s harder to couple up dependencies.
Deep fake celebrity scams are flooding YouTube advertising. Google does not care much and continues to collect revenue from it. It’s interesting to see how other people tracked companies that create these deep fakes, but for now they are continuing to produce this bullshit. I wrote about it some time ago on Mastodon, where I mentioned that YouTube does not care about it as their whole profit comes from advertisements. Why would they fix it if this generates revenue? It’s USA capitalism, baby. Milk it till it bleeds. Yeeeehaaaa! (Pathetic)
Interesting blog post about artists who switched from proprietary software to free software. I like that it not only shows the reasons behind the switch but also the experiences they went through. The post also has some quotes from interviews. I think especially the last one is worth checking out. It's about how artists can shape the software with small contributions, and the feeling associated with that.
We pay a huge amount of money for groceries in the Netherlands. I was looking for a reason for that, since inflation went down and electricity prices are the same. Now I finally have a reason why they are snoop-dogg high: "Energy prices are now lower, but supermarkets and suppliers are tied to contracts. So those costs will remain high for the time being." "The time being" seems to not be defined. #eattherich
OpenAI responds to the NYT lawsuit. They mention that their work supports journalism and that it's not a big deal that their models memorise text. Well, I think they should ask ChatGPT whether memorisation is not a critical issue that should prevent them from taking money and selling GPT as a tool.
“‘Impossible’ to Create AI Tools like ChatGPT without Copyrighted Material, OpenAI Says,” n.d.
Then pay for each text you gathered on your servers. This is getting ridiculous…
A bit more information about the NYT court case. Experts review the NYT court document and suggest possible end results. tldr; the case is strong; they can win, but the models will probably stay as they are. It all depends on how the lawyers present the case to the jury. "A lot of this is about persuading the courts of your vision of what generative AI looks like."
GPT models seem to "memorize" training data, which is shown not only by the NYT court case but also by BAIR. For me, this is not shocking news; I have mentioned it many times already. Companies like OpenAI and Microsoft lie to us when it comes to the legal usage of web-scraped data. Current LLMs memorise a lot, and because training sets are no longer shared with other researchers, it's getting harder and harder to track what those companies are doing.
Duolingo is firing people due to AI enhancements. Some of their current courses are insanely bad. The Dutch one seems to be purely AI-generated, and that was one of the reasons I had to change the app I use to learn this language. Verdomme
Blog post about the history of decompilation research. If you are interested in how to get C code out of compiled code, I think this is a good introduction. The author identifies three core pillars of decompilation, "CFG recovery (through disassembling and lifting), Variable recovery (including type inferencing), Control flow structuring", and describes how a basic decompiler recognises these patterns.
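As a toy illustration of the CFG-recovery pillar, here is the classic "leaders" algorithm for splitting an instruction stream into basic blocks: the first instruction, every jump target, and every instruction after a jump start a new block. The instruction format below is invented for the example.

```python
def basic_blocks(instrs):
    """Split a toy instruction list into basic blocks via leader-finding.
    Each instruction is (opcode, argument); jump targets are indices."""
    leaders = {0}  # the first instruction always leads a block
    for i, (op, arg) in enumerate(instrs):
        if op in ("jmp", "jz"):
            leaders.add(arg)            # the jump target leads a block
            if i + 1 < len(instrs):
                leaders.add(i + 1)      # so does the fall-through instruction
    order = sorted(leaders)
    return [instrs[a:b] for a, b in zip(order, order[1:] + [len(instrs)])]

prog = [("mov", 0), ("jz", 3), ("add", 1), ("ret", None)]
blocks = basic_blocks(prog)
```

A decompiler then connects these blocks with edges (fall-through and jump targets) to get the control-flow graph that the structuring phase works on.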
Some university students were asked to use AI for various tasks during the semester. The tasks ranged from text and image generation to asking GPT-4 to grade their work. In general, people struggled to use it. What I find really interesting is that they were writing essays, and everyone was surprised that GPT does not accurately print citations. I mean, wasn't there a professor who actually described how this AI works? XD Nevertheless, it's an interesting topic and the post is worth reading.
Interesting blog post about the website comments section, which is handled via an SSH connection to the server where the site is hosted. When you think about it, it’s super complicated, and probably most users of WWW would not be able to comment with such technology behind it. Which is true. But I like how the author mentioned that this additional step might actually filter out low-quality comments and decrease current spam messages created by automated software that is focused on platforms like Disqus.
The author compared the length of lambda expressions in different programming languages. Later, he showcases why C++ lambdas are so long. The blog concludes with a funny quote: "But that's a huge digression from the main point of this post, which is quite simply: C++ has really, really long lambdas."
Great article on how to train your eyes and hand movement for better proportion drawing. The examples the author shows vary from putting dots between dots and lines between lines to shape-drawing exercises. I have seen similar techniques used by really successful artists, so I would recommend it to anyone starting their way in the art world.
Interesting blog about the usage of the A* algorithm in small 2D video games. A very pleasant introduction to the problem. The author also provides good visualisations of the problems he encountered.
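For reference, a bare-bones A* on a 2D grid, the kind of pathfinding the post visualises. This is a generic textbook version with a Manhattan-distance heuristic, not the author's code.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D grid of 0 (free) / 1 (wall), 4-directional movement."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]  # (f = g + h, g, node)
    g = {start: 0}
    came = {}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]                 # walk parents back to the start
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue                     # stale heap entry, skip it
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                ng = cost + 1
                if ng < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = ng
                    came[(nx, ny)] = cur
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return None  # goal unreachable

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (0, 2))  # must route around the wall column
```

The heuristic is what separates A* from plain Dijkstra: it steers the search toward the goal, which matters a lot on the large open maps games tend to have.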
Interesting blog post about the representation of movement in art. I really liked how the author managed to put so many details and examples into one post. In the last part he also shows how different art movements, like impressionism, rococo, and surrealism, handled motion.
Bisqwit managed to get a Bachelor's degree! The guy is insanely smart and professional. I remember he was a huge inspiration for me to become a try-hard Linux nerd. If you like to write C while looking at the screen of a weird editor, this channel is for you.
This is a great summary of the current state of LLMs. The first part explains why the NYT decided to sue OpenAI and Microsoft. Among the interesting parts: OpenAI is no longer a non-profit organization and does not publish its research, while Microsoft is funding it. The second part is even more interesting. It shows examples where GPT from OpenAI produced almost 1-to-1 quotes from paywalled NYT articles. Next, there are examples of summarization and paragraph injection, which allow you to access NYT articles. I really think Microsoft and OpenAI do not look good here. What a time to be alive!
Interesting blog about LLMs. It clearly explains what we actually learned about LLMs this year. I don't fully agree with all the claims in it, like the one that the best use case of LLMs is code generation. I personally think the best use case is writing fiction stories and summaries. There is also an interesting part about the ethical usage of AI. The author found an interesting link to a document written by the New York Times, which sued OpenAI and Microsoft. It is definitely a must-read.
Great post about the math, reasoning, and implementation of an LLM in … SQL. This one is really good. Step by step, it shows how to build an LLM from scratch. I think I'll have to read it again to grasp everything better, but after the first pass I already understood enough to share it as a good read.
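I haven't reproduced the SQL here, but the core operation such a post has to build up to, scaled dot-product attention with softmax, looks like this in plain Python (just the math, no SQL, and no claim this matches the post's exact formulation):

```python
import math

def softmax(xs):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(xs)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, keys, values):
    """Single-query scaled dot-product attention: weight each value
    by how similar its key is to the query."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    w = softmax(scores)
    return [sum(wi * v[j] for wi, v in zip(w, values))
            for j in range(len(values[0]))]

probs = softmax([1.0, 2.0, 3.0])
out = attention([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]], [[2.0, 4.0], [2.0, 4.0]])
```

Expressing these sums and exponentials as recursive CTEs and window functions is what makes the SQL version such a fun read.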
Super interesting post about drawing hands and arms and their proportions. The author suggests the best ratios for them. It also mentions what the golden ratio is and how to draw fingers with proper angles between them.
Richard Dawkins gives a lecture in front of kids on British national television. This is an audio-feed version, but it's still beautifully recorded. I really liked the part where he described how our brain builds the world we see. His examples with illusions show in an easy way how our brain can be tricked into imagining things that do not exist. I also liked the last part, where he taught the kids about the importance of validating their beliefs. The way he speaks about religion is something every person should hear at least once in their lifetime.
Great blog post on different types of perspective. It covers 1-, 2-, and 3-point perspective, defines vanishing points, and explains aerial perspective. I think it's worth reading for everyone who wants to draw better.
The first part of the episode is focused on the current Israeli-Palestinian conflict, and it's not well explained. Both of them lack knowledge about the history, which makes them drift between their assumptions of what the situation really looks like. It's sad, because if you read a short summary of the whole conflict, it's clear that both the Israeli government and Palestinian Hamas act as terrorists. The second part is better though. I really liked the discussion about free speech in science; this was really well prepared. The last part concentrated on genetic differences between "races". It was also pretty good. My personal opinion about free speech and diversity in academia is a bit different from theirs. I think we should give people with less wealth the possibility to study. Most of those people come from quite a diverse spectrum of origins: different skin colours, genders, and sexes. There should be a way to let them join academia even if their first years of life did not allow their families to gather wealth for their precious education. But I'm not a scientist. For sure, we should not look for an answer to lower IQ in genes but focus on society, even though I don't think genes are completely irrelevant. Still, society and its inequality play a huge role. I think both of them missed this point, which makes me quite sad. Keeping universities only for rich white kids is something we should all avoid, even if their education is simply greater. Universities shape the future, and if there is no diversity in them, a lot of problems won't even be addressed. There are many examples of that, from AI that does not recognise black people as humans to research on sociological topics that omits diverse parts of the population.
Interesting article about the flexibility of programming when it comes to the expression of ideas. I found two quotes predicting the future of programming: "With time‑sharing, large heuristic programs will be developed and modified by several programmers, each testing them on different examples from different consoles and inserting advice independently. The program will grow in effectiveness, but no one of the programmers will understand it all. (Of course, this won't always be successful‑the interactions might make it get worse, and no one might be able to fix it again!) Now we see the real trouble with statements like "it only does what its programmer told it to do." There isn't any one programmer." And "Computer programs are good, they say, for particular purposes, but they aren't flexible. Neither is a violin, or a typewriter, until you learn how to use it."
Someone created 4 billion if statements to solve the isOdd problem. Inspired by a TikTok, it quickly evolved into something more interesting, where the programmer had to generate assembly code, end up with a 40 GB binary, and load it into virtual memory. Quite an interesting topic.
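For contrast, the whole 40 GB of if statements collapses into a single bit test; this is the standard parity trick, not anything from the original project:

```python
def is_odd(n: int) -> bool:
    """The lowest bit of a two's-complement integer decides parity,
    so one AND replaces 4 billion if statements."""
    return n & 1 == 1
```

In Python this even works for negative numbers, since `&` operates on the two's-complement representation.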
Notes on how to write proofs in the Lean programming language. I hadn't heard about it before, but it seems like an interesting project. It's a functional programming language which allows you to write theorems and proofs.
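For a taste of what that looks like, here is a tiny theorem and proof in Lean 4 syntax (my own minimal example; the notes themselves may use Lean 3 or different idioms):

```lean
-- A theorem proved by applying an existing lemma from the standard library.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- The same kind of thing can be done with tactics, building the proof step by step.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q := by
  constructor
  · exact hp
  · exact hq
```

The type checker verifies the proof: if the file compiles, the theorem holds.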
Research on AI-generated content on Facebook, with interesting points on how the images are generated. Most of the ideas are stolen from other artists and then altered with AI. For now it's still possible to detect whether images were altered, but in 5 years it may no longer be possible to distinguish them from real artists' work.
The author rants about why he does not like traveling. I think most of his points are not valid and show a rather closed-minded approach to visiting new places.
I haven’t write for quite some time… A lot of things are on my head right now…
I still haven’t finished appartment renovation and finally decided to workout. Other than that I work as Full Stack Developer for ING. I wish to have more motivation to make some proprer programming. Sadly due to overwhelming situation at home it’s hard to be productive
Last week I started a rewrite of my Stativa project. It will be moved to a fully web-based solution. I hope to finish it before the new year
On a positive note, I'm listening to an audiobook about Stoic philosophy. It's pretty good, and it makes my mind less focused on current advancements in AI
I also learned Dutch to the point where it's possible for me to communicate during drinks with my friend in Weesp
Maybe I should start writing about my general thoughts and not only focus on IT. There are for sure some things I would like to write about like atheism, pacifism and social media privacy issues
Time will show
Lastly, I bought an O'Reilly subscription and try to read a couple of pages a week. It's a really good service. Sadly, it comes at a price that would probably not be affordable if I were not a programmer
Ladies and gentlemen, I think we have the first confirmed death caused by an LLM
A Belgian man took his own life after not being able to find help from the AI that he treated as his therapist
I guess the introduction of chatbots is moving faster than the education that these AIs are not your friends, therapists, or someone you should fall in love with
Pine64 made another newsletter with information about their products and I just love it!
I wish more companies did such updates. Clear and concise newsletters that are easy to read on any device
Pine64, DuckDuckGo, and Leaf Shave are among the few companies whose newsletters I read fully, and the key features between them are the same
I recently started to think about the current sources of data that are fed to AI and the claim that "AI like GPT is not producing samples of the data it was trained on but instead creates new content based on context".
So let's start the experiment. Some person A scrapes websites like IMDb for movie reviews and later feeds them to his AI.
Next, he defines the output of AI. Basically, AI should output new reviews with the context of previously learned movies.
Context is defined as a positive or negative review.
So when you ask this AI to generate a review of Scott Pilgrim vs The World, it would generate content with text that is completely different from all the reviews written on IMDb, but the context of those reviews is remembered.
It’s important that this context is limited to the data sources
So this AI is capable of generating reviews for all IMDb movies, but the reviews are different each time. The thing is, you can ask your AI to make a review based on some parameter, let's say the overall rating of the movie. The AI is aware of this rating, and it always generates positive reviews for Scott Pilgrim vs The World.
Should it be right for person A to do this? It does not repeat content as "samples", but it repeats the context of the data. It repeats the general opinion of people, which is the intellectual content of IMDb.
I made a small React library which allows you to create sliders from any component you want.
This library should be fully compatible with any touch device which is supported by
Hammer.js
The library can be found on npm and examples of its usage here
If you need to make changes, feel free to make a PR on Codeberg
Life got busier recently. I need to focus more on my thesis and work. When it comes to technologies, I continued learning GSAP, but I try to focus more on CSS keyframes and CSS variables so as not to rely on JS for animations. I also invested some time in learning Next.js
I have stopped writing on this blog since I host everything on my local server that I did not have time to configure after I moved to a new apartment. Now I hope I will share more small updates and maybe some interesting blog posts about general programming knowledge
Apparently, you cannot write this on LinkedIn as it’s not professional. But I don’t really care.
This is my blog and I can write whatever I want here. People in Ukraine are dying because of Putin
This is not acceptable. #fuckputin #fuckrussia
Here are a bunch of links to support Ukrainians with their fight for independence
Looks like FLoC from Google won't be introduced to the general public, and the company decided to make a new algorithm for personalised ad targeting. They not only changed the name from FLoC (Federated Learning of Cohorts), which sounds scary, to Topics, but the whole architecture is different. Is it good? Is tracking people online good? Depends who you ask, but it's good to keep an eye on this, since it delays the phase-out of third-party cookie functionality in the Chrome browser, now estimated for 2023
This weekend I was involved in an FB conversation about Live Coding and why seniors don’t like it
It all started with me reading an article that really got me thinking about the state of senior IT professionals. Adam, the author of the article, shared some points that seniors don't like. I'll also share them to give you an overview of what I think people with experience in IT are scared of
They take a ton of prep time to nail - that's true. You should prepare for a job interview, but it was always like this. Kids in school learn that to get better grades you need to study hard. So what? You want 6k euros a month but don't want to spend time studying?
They push senior engineers to work differently - I can tell you one thing. If you are a good specialist in the field, you should constantly pull yourself out of your comfort zone and get used to it.
They don't really test what you'll actually do once hired - The employer decides how he wants to test your skills. You might not like it, but this is how it is. I have heard many times that the algorithms you write on Codility are not something you will write on a daily basis. That's true, but there is a reason for your recruiter to ask you to do such an assignment. He wants to see how you think, how you perform under pressure, and whether you will give up. I think the last thing is the most important: you might not like the requested live coding assignment, but never give up.
They send a bad message - this one is about how, when you stress coding interviews in your hiring process, you make senior engineers second-guess the role for which you're hiring. I can't even imagine a real senior developer who gets upset about being asked to do a coding assignment.
To conclude, I really think that the current state of IT professionals is a mess. People think they deserve a lot of money without proper skills, and because the culture of developers moved from skilled professionals to script kiddies, everyone is a senior now
One last thing. I am a senior myself, but when I compare myself to the senior-level people I have worked with, I know which skills I lack. I know how much knowledge I am missing and what I need to improve to be a better developer. When I do live coding, I try to show my best skills. Many times I have failed on some really basic things, but I never gave up, and most importantly, I took lessons from my mistakes so as not to repeat them
I found a great place to search for websites with interesting content and without the bloat.
The search engine is called Wiby, and it lets you search for websites
built similarly to those from the early days of the web.
Wiby’s about page has a quote explaining why it was built,
and I found it very relatable:
In the early days of the web, pages were made primarily by hobbyists, academics, and computer savvy people about subjects they were personally interested in. Later on, the web became saturated with commercial pages that overcrowded everything else. All the personalized websites are hidden among a pile of commercial pages.
Don’t you feel overwhelmed by the shit coming from the Google search engine? I can tell you
that I am. Every page I visit has tons of floating content, cookie popups,
newsletter subscriptions, and content that is usually not interesting at all.
A lot of these websites also use dark patterns that bait you into clicking. This is really
horrible, and I’m getting sick of it. Recently I noticed that I click on things
I really don’t want to, simply because it’s a behavior I’ve developed.
To not end this post on a sad note, I will share one website I found through Wiby.
It was made by a person who collects old computer mice.
It might not look modern, but the content is genuine, and it looks like someone
really took the effort to write it.
I went back to PHP, and since I haven’t written it in a couple of years, I thought it would be good to check the state of the CodeIgniter framework.
The API for saving files is something I was not expecting, since my blurry memory of PHP tells me that file uploads were a mess.
I thought maybe I was the only one who couldn’t get my head around it, and then I read:
When you upload files they can be accessed natively in PHP through the $_FILES superglobal. This array has some major shortcomings when working with multiple files uploaded at once, and has potential security flaws many developers are not aware of.
But no more headaches! The new API is clear and extremely easy to use.
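For illustration, here is a minimal sketch of what an upload handler can look like in CodeIgniter 4. The controller name and the `avatar` form field are made-up examples, not code from the framework’s docs:

```php
<?php
// Minimal sketch of a CodeIgniter 4 upload handler.
// The controller name and the 'avatar' field are made-up examples.
namespace App\Controllers;

class Upload extends BaseController
{
    public function store()
    {
        // getFile() wraps $_FILES in a validated UploadedFile object
        $file = $this->request->getFile('avatar');

        if ($file->isValid() && ! $file->hasMoved()) {
            // Move to the writable directory under a random, safe name
            $file->move(WRITEPATH . 'uploads', $file->getRandomName());
            return $this->response->setJSON(['status' => 'ok']);
        }

        return $this->response->setStatusCode(400);
    }
}
```

Compare that to poking around `$_FILES` by hand and you can see why the docs call the old way a mess.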
I decided to move to Vim as my default "IDE" for university projects. I had used VS Code for around 5 years,
and it was working great. I still think it’s the best editor in the world: plugins work like a charm,
there is support for almost every language, and it’s blazingly fast.
Why, then, did I move back to Vim? I noticed that VS Code is going in a strange direction. About a month
ago I noticed that every time I opened it, some strange login popup was shown, and it’s not really described
where it points. Then I noticed that there is a special integration with GitHub that lets you
log in via a token only when you use VS Code, even though plain HTTP login is no longer possible. This is only
possible with GitHub…
In general, I’m starting to notice a pattern: Microsoft made VS Code free, but as always, free tools that come
from corporations don’t respect your privacy.
Lastly, OpenAI developed Copilot, which is again closed source and trained on code
hosted on GitHub. I kind of feel like with Copilot, OpenAI robbed programmers of
their work. On the Copilot website there is a quote:
Training machine learning models on publicly available data is considered fair use across the machine learning community.
I agree with that, but machine learning models built on publicly available data should also be
available for free. This whole field lacks regulation, and some companies clearly benefit from that.
Other scary claims are:
If the technical preview is successful, our plan is to build a commercial version of GitHub Copilot in the future. We want to use the preview to learn how people use GitHub Copilot and what it takes to operate it at scale.
and
Not yet. For now, we’re focused on delivering the best experience in Visual Studio Code only.
Because of all of that, I decided to give Vim a try again. I configured CoC, a syntax highlighter, a linter, and
custom mappings. Everything seems to work great, even though I need this setup for
PHP, JS, HTML, Elixir, Java, Bash, and Scala.
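For reference, a minimal sketch of that kind of setup, assuming the vim-plug plugin manager. The plugin choices here are common examples, not my exact configuration:

```vim
" ~/.vimrc — minimal sketch, assuming vim-plug is installed
call plug#begin('~/.vim/plugged')
Plug 'neoclide/coc.nvim', {'branch': 'release'}  " LSP-style completion (CoC)
Plug 'sheerun/vim-polyglot'                      " syntax highlighting for many languages
Plug 'dense-analysis/ale'                        " asynchronous linting
call plug#end()

" Example custom mapping: jump to definition through CoC
nmap <silent> gd <Plug>(coc-definition)
```

Each language then only needs its language server registered with CoC, so one setup covers all of them.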
Today I started to think about why certain professors at Polish universities can treat students with no respect.
For example, there are some classes in which you don’t want to ask questions, because if you do, you will be treated like a stupid person.
I think I know why it’s allowed and why nothing has changed for decades:
universities create a toxic environment because students have very limited means of stopping the harassment.
I’ve been a developer for a very long time, and I also started working pretty early in my life. The main difference between work and
university is that you can change jobs whenever you want, not to mention filing complaints about your coworkers.
Imagine having a professor who says "Women are not good IT professionals". What would you do if a coworker said something
like this? Obviously, you would file a complaint, and if your boss didn’t care, you could easily change jobs. This is something
you can’t do at a university. You can’t easily change it, since that would require moving to a different city and probably
passing some classes again.
Today, again, one of the students in my group was treated like shit, and I can’t do anything about it; otherwise,
I’ll probably have to look for a new university to study at…
Lastly, it’s not like students are without guilt, but everyone who has studied at a Polish university will know what I’m writing about.
I really don’t get it. There are many good alternatives, like PostgreSQL or MariaDB, yet every
semester that includes a database course bases its assignments on SQL Server. At least there is a Docker version now,
so I don’t need to install Windows anymore. Another issue is that sometimes it’s necessary to install SQL Server Management Studio (SSMS), which does not work on Linux.
I really wish universities moved to open-source alternatives, or at least to services that don’t require heavy tools like SSMS. It would make
students’ lives easier, and personally, with 6 years of experience as a developer, I have never had to use SQL Server commercially as a platform to store data.
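For anyone in the same situation, this is roughly how the Docker route looks. The image tag and password below are placeholders based on Microsoft’s published quickstart, so double-check them against the current docs:

```shell
# Run SQL Server in a container instead of installing Windows.
# Accepting the EULA and setting a strong SA password are required by the image.
docker run -d \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  --name mssql \
  mcr.microsoft.com/mssql/server:2022-latest
```

After that, any client that speaks TDS can connect to `localhost:1433`, no Windows needed.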
Recently I had to make one of those flashy websites with scroll triggers and parallax effects. I did some research on which animation library to use, and even though there are some good ones, like anime.js or react-spring, I got hooked on GSAP. It’s a huge library, but it really lets you abstract animations, so coding with it is relatively easy and pleasant. For sure I’ll learn more about it.
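To give a feel for it, here is a minimal sketch of a scroll-triggered animation using GSAP’s ScrollTrigger plugin. The `.hero` selector is a made-up example element, and it assumes gsap is installed (e.g. via npm):

```javascript
// Minimal sketch: fade an element in as it scrolls into view.
// ".hero" is a placeholder selector for the animated element.
import { gsap } from "gsap";
import { ScrollTrigger } from "gsap/ScrollTrigger";

gsap.registerPlugin(ScrollTrigger);

gsap.from(".hero", {
  opacity: 0,
  y: 100, // start 100px lower and slide up into place
  scrollTrigger: {
    trigger: ".hero",
    start: "top 80%", // begin when the element's top hits 80% of the viewport
    scrub: true,      // tie the animation's progress to the scroll position
  },
});
```

The nice part is the abstraction: the same `gsap.from` call works whether the trigger is a scroll position, a timeline, or a plain tween.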