“Deepfaked Celebrity Ads Promoting Medicare Scams Run Rampant on YouTube,” n.d. https://www.404media.co/joe-rogan-taylor-swift-andrew-tate-ai-deepfake-youtube-medicare-ads/
Deepfake celebrity scams are flooding YouTube advertising, and Google does not seem to care much as long as it keeps collecting the ad revenue. It’s interesting to see how other people tracked down the companies that create these deepfakes, but for now they keep churning out this bullshit. I wrote about it some time ago on Mastodon, where I mentioned that YouTube does not care because its whole profit comes from advertisements. Why would they fix it if it generates revenue? It’s USA capitalism, baby. Milk it till it bleeds. Yeeeehaaaa! (Pathetic)
“Who Needs Adobe? These Design Studios Use Free Software Only.,” n.d. https://notes.ghed.in/posts/2022/free-software-design-studios/
Interesting blog post about artists who switched from proprietary software to free software. I like that it not only shows the reasons behind the switch but also the experiences the artists went through. The post also includes some quotes from interviews; I think the last one in particular is worth checking out. It’s about how artists can shape the software with small contributions and the feeling that comes with that.
“Groceries Persistently Expensive; Full Shopping Trolley Costs €132 on Average,” n.d. https://nltimes.nl/2024/01/09/groceries-persistently-expensive-full-shopping-trolley-costs-eu132-average
We pay a huge amount of money for groceries in the Netherlands. I was looking for a reason, since inflation has gone down and so have electricity prices. Now I finally have an explanation for why they are snoop dogg high: "Energy prices are now lower, but supermarkets and suppliers are tied to contracts. So those costs will remain high for the time being." "The time being" seems to not be defined. #eattherich
“XState,” n.d. https://stately.ai/docs/xstate
A JavaScript and TypeScript library for building complex state management with state machines and actors. It looks pretty nice!
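For reference, a minimal sketch along the lines of the toggle-machine example from the docs (XState v5 naming assumed; v4 used `interpret` instead of `createActor`):

```typescript
import { createMachine, createActor } from "xstate";

// A two-state toggle machine: explicit states, explicit transitions.
const toggleMachine = createMachine({
  id: "toggle",
  initial: "inactive",
  states: {
    inactive: { on: { TOGGLE: { target: "active" } } },
    active: { on: { TOGGLE: { target: "inactive" } } },
  },
});

// An actor is a running instance of the machine.
const toggleActor = createActor(toggleMachine);
toggleActor.subscribe((snapshot) => {
  console.log("state:", snapshot.value);
});

toggleActor.start();                  // state: inactive
toggleActor.send({ type: "TOGGLE" }); // state: active
toggleActor.send({ type: "TOGGLE" }); // state: inactive
```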
“OpenAI and Journalism,” n.d. https://openai.com/blog/openai-and-journalism
OpenAI responds to the NYT lawsuit. They claim that their work supports journalism and that it’s not a big deal that their models memorise text. Well, I think they should ask ChatGPT whether memorisation isn’t a critical issue that should prevent them from taking money and selling GPT as a tool.
“‘Impossible’ to Create AI Tools like ChatGPT without Copyrighted Material, OpenAI Says,” n.d.
Then pay for each text you gathered on your servers. This is getting ridiculous…
“Copyright Expert Predicts Result of NY Times Lawsuit against Microsoft, OpenAI,” n.d. https://www.msn.com/en-us/money/companies/copyright-expert-predicts-result-of-ny-times-lawsuit-against-microsoft-openai/ar-AA1mA6L0
A bit more information about the NYT court case. Experts review the NYT court filing and suggest possible outcomes. tl;dr: the case is strong and they can win, but the models will probably stay as they are. It all depends on how the lawyers present the case to the jury. "A lot of this is about persuading the courts of your vision of what generative AI looks like."
“Does GPT-2 Know Your Phone Number?,” n.d. https://bair.berkeley.edu/blog/2020/12/20/lmmem/
GPT models seem to "memorize" training data, which is shown not only by the NYT court case but also by BAIR. For me this is not shocking news; I have mentioned it many times already. Companies like OpenAI and Microsoft lie to us when it comes to the legal usage of web-scraped data. Current LLMs memorise a lot, and because training sets are no longer shared with other researchers, it’s getting harder and harder to track what those companies are doing.
“Duolingo Laid off a Huge Percentage of Their Contract Translators,” n.d. https://twitter.com/rahll/status/1744234385891594380?t=wscwWLsx12JzGFVq9wccTw
Duolingo is firing people due to AI enhancements. Some of their current courses are insanely bad; the Dutch one seems to be purely AI-generated, and that was one of the reasons I had to switch apps to learn this language. Verdomme.
“30 Years of Decompilation and the Unsolved Structuring Problem: Part 1,” n.d. https://mahaloz.re/dec-history-pt1
A blog post about the history of decompilation research. If you are interested in how to get C code back out of compiled binaries, I think this is a good introduction. The author identifies three core pillars of decompilation: "CFG recovery (through disassembling and lifting)", "variable recovery (including type inferencing)", and "control flow structuring", and describes how a basic decompiler recognises these patterns.
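To make the "control flow structuring" pillar concrete, here is a toy sketch of my own (not the author's code, and heavily simplified): represent the recovered CFG as labelled basic blocks with successor edges, then look for the classic if/then/else "diamond" shape and emit structured pseudo-code instead of gotos.

```typescript
// Toy CFG: basic blocks identified by label, each with successor labels.
interface BasicBlock {
  label: string;
  code: string[];        // pseudo-statements inside the block
  successors: string[];  // labels of the blocks this one can jump to
}

type CFG = Map<string, BasicBlock>;

// Recognise the classic if/then/else "diamond":
//   head -> thenBlk -> join
//   head -> elseBlk -> join
// and emit structured pseudo-code instead of gotos.
function structureDiamond(cfg: CFG, headLabel: string): string | null {
  const head = cfg.get(headLabel);
  if (!head || head.successors.length !== 2) return null;

  const [thenBlk, elseBlk] = head.successors.map((l) => cfg.get(l));
  if (!thenBlk || !elseBlk) return null;

  // Both branches must fall through to the same join block.
  if (
    thenBlk.successors.length !== 1 ||
    elseBlk.successors.length !== 1 ||
    thenBlk.successors[0] !== elseBlk.successors[0]
  ) {
    return null;
  }

  const cond = head.code[head.code.length - 1]; // last stmt = branch condition
  return [
    `if (${cond}) {`,
    ...thenBlk.code.map((s) => `  ${s};`),
    `} else {`,
    ...elseBlk.code.map((s) => `  ${s};`),
    `}`,
  ].join("\n");
}

// Example: a four-block diamond recovered from "disassembly".
const cfg: CFG = new Map([
  ["B0", { label: "B0", code: ["x > 0"], successors: ["B1", "B2"] }],
  ["B1", { label: "B1", code: ["y = 1"], successors: ["B3"] }],
  ["B2", { label: "B2", code: ["y = -1"], successors: ["B3"] }],
  ["B3", { label: "B3", code: ["return y"], successors: [] }],
]);

console.log(structureDiamond(cfg, "B0"));
// if (x > 0) {
//   y = 1;
// } else {
//   y = -1;
// }
```

Real decompilers of course handle loops, switches, and irreducible control flow, which is where the "unsolved structuring problem" from the title comes in.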
“AI versus Old-School Creativity: a 50-Student, Semester-Long Showdown,” n.d. https://blog.still-water.net/ai-versus-old-school-creativity/
Some university students were asked to use AI for various tasks during the semester, ranging from text and image generation to asking GPT-4 to grade their own work. In general, people struggled to use it. What I find really interesting is that they were writing essays, and everyone was surprised that GPT does not produce accurate citations. I mean, wasn’t there a professor around who could have explained how this AI actually works? XD Nevertheless, it’s an interesting topic and the post is worth reading.
“SSH Based Comment System,” n.d. https://blog.haschek.at/2023/ssh-based-comment-system.html
Interesting blog post about a website comments section that is handled via an SSH connection to the server where the site is hosted. When you think about it, it’s quite involved, and most WWW users probably would not be able to comment with such technology behind it. Which is true. But I like how the author points out that this extra step might actually filter out low-quality comments and cut down on the spam produced by automated software that targets platforms like Disqus.
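The post describes the author's own setup, which I won't reproduce here. Purely as a sketch of the general idea (the user name, paths, and forced-command wiring are my assumptions, not the author's), the server-side piece can be a tiny script that sshd runs for a dedicated comments user: it reads the comment text from stdin and appends it to a per-post file.

```typescript
// Hypothetical server-side handler (my own sketch, not the author's code).
// Assumption: sshd invokes this as a forced command (command="..." in
// authorized_keys) for a dedicated "comments" user, so the text the visitor
// pipes over SSH arrives on stdin and the post id arrives via
// SSH_ORIGINAL_COMMAND (or argv when testing locally).
import { appendFileSync } from "node:fs";
import { argv, env, exit, stdin } from "node:process";

const rawId = env.SSH_ORIGINAL_COMMAND ?? argv[2] ?? "";
const postId = rawId.replace(/[^a-z0-9-]/gi, ""); // crude sanitising
if (!postId) {
  console.error("usage: pipe the comment text and pass a post id, e.g.");
  console.error('  echo "nice post" | ssh comments@example.org my-post-slug');
  exit(1);
}

let body = "";
stdin.setEncoding("utf8");
stdin.on("data", (chunk) => (body += chunk));
stdin.on("end", () => {
  const text = body.trim().slice(0, 2000); // hard length cap against spam walls
  if (!text) exit(1);

  // One JSON line per comment; the static site generator can render these later.
  const entry = JSON.stringify({
    post: postId,
    date: new Date().toISOString(),
    text,
  });
  appendFileSync(`/var/comments/${postId}.jsonl`, entry + "\n");
  console.log("Thanks, your comment was recorded.");
});
```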
“Lambda Lambda Lambda,” n.d. https://brevzin.github.io/c++/2020/06/18/lambda-lambda-lambda/
The author compares the length of lambda expressions in different programming languages and then shows why C++ lambdas are so long. The post concludes with a funny quote: "But that’s a huge digression from the main point of this post, which is quite simply: C++ has really, really long lambdas."