Quite a few reads this week!
“First UK Patients Receive Experimental Messenger RNA Cancer Therapy,” n.d. https://www.theguardian.com/science/2024/feb/04/first-uk-patients-experimental-messenger-mrna-cancer-therapy-treatment
These new methods might have a higher chance of curing cancer while being less invasive for the human body.
I’d also like to note that these advancements were made by a huge number of dedicated people who decided to study instead of praying.
“Towards a Folk Computer,” n.d. https://folk.computer/notes/tableshots
Sounds insane and amazing?
Well, from what I see, it is! The example with the button is simply mind-blowing!
It’s all based on AprilTag, which is apparently something like a QR code for robotics. From what I read, parsing and generating such tags is super fast. Definitely something worth reading.
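If you want to play with the idea yourself, here’s a minimal detection sketch using the pupil-apriltags Python bindings (one of several AprilTag libraries; the file name is just a placeholder):

```python
import cv2                             # pip install opencv-python
from pupil_apriltags import Detector   # pip install pupil-apriltags

# Load a photo of the table as an 8-bit grayscale image ("photo.jpg" is a placeholder).
gray = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

# tag36h11 is the standard AprilTag family.
detector = Detector(families="tag36h11")

for det in detector.detect(gray):
    # Each detection carries the tag id plus its center and corner pixels,
    # which is all you need to map a physical card to a program.
    print(det.tag_id, det.center, det.corners)
```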
“The World’s Most Responsible AI Model - (HAHAHAHAHHAHAHAHAHAHAHAHAHAHAH),” n.d. https://www.goody2.ai/
They apparently made an AI which is:
‘GOODY-2 is a new AI model built with next-gen adherence to our industry-leading ethical principles. It’s so safe, it won’t answer anything that could possibly be construed as controversial or problematic.’
So I scrolled their website and looked for the actual model or paper with some validation. There is none.
Now, after playing around with it, I’m not sure if it’s a meme or an actual product. It’s probably a meme and I took the bait.
“The Effect of Sampling Temperature on Problem Solving in Large Language Models,” n.d. https://arxiv.org/pdf/2402.05201.pdf
What I find spooky is how good the GPT models are compared to the Llama models. It’s almost suspicious, as if these validation datasets had ended up in the training data.
If that happened today, could researchers even verify it?
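For context on what the paper is actually varying: temperature just rescales the logits before the softmax, so low T sharpens the distribution and high T flattens it. A quick numpy sketch:

```python
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Scale logits by 1/T before softmax: T < 1 sharpens, T > 1 flattens."""
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([2.0, 1.0, 0.5])
for t in (0.2, 1.0, 2.0):
    print(t, softmax_with_temperature(logits, t).round(3))
```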
“Pg-Diff,” n.d. https://michaelsogos.github.io/pg-diff/
I’ll post it here so that, if I ever come back to working with Postgres, it might be useful.
“(Almost) Every Infrastructure Decision I Endorse or Regret after 4 Years Running Infrastructure at a Startup,” n.d. https://cep.dev/posts/every-infrastructure-decision-i-endorse-or-regret-after-4-years-running-infrastructure-at-a-startup/
Some personal notes:
- monthly cost tracking meetings - I really like this idea, and I think more companies should at least record these meetings and have an LLM summarize the output. It’s good to have an overview of how much is spent on infrastructure and to update it at regular intervals.
- multiple applications sharing a database - this is something I’ve noticed in many commercial projects. It feels like this mistake is unavoidable: when a company grows quickly, there is not enough time to create a proper DB architecture.
- not using a network mesh (as no regrets) - I fully agree with the author’s opinion on this. Network meshes are fantastic, but the complexity they involve quickly becomes insane to maintain. What I would suggest is to start by deploying to k8s and keeping track of your microservices. Keep the door open in case you ever need a service mesh, but do not start with one.
And that’s it! There are more things in it, but these are the ones I found the most interesting. What are your opinions on that?
I don’t use any comment section on my blog, but please don’t hesitate to PM me on Mastodon!
“ASCII Smuggler Tool: Crafting Invisible Text and Decoding Hidden Codes,” n.d. https://embracethered.com/blog/posts/2024/hiding-and-finding-text-with-unicode-tags/
It’s a tool that lets you inject invisible text into an LLM - text that probably won’t be visible in your application.
You can use it for nasty prompt injections, but maybe you could also use it to actually mess up the training of LLMs.
Just add 1000 lines of hidden text, using various techniques, at the end of your blog post and watch OpenAI’s models speak ASCII Chinese in 2 years.
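The trick, as the post describes it, is shifting ASCII into the invisible Unicode “Tags” block (U+E0000-U+E007F). A minimal round-trip sketch in Python:

```python
# Toy re-implementation of the trick from the post: printable ASCII is shifted
# into the invisible Unicode "Tags" block, so most UIs render nothing while an
# LLM tokenizer may still see the payload.

TAG_OFFSET = 0xE0000

def hide(text: str) -> str:
    """Encode printable ASCII as invisible Unicode tag characters."""
    return "".join(chr(TAG_OFFSET + ord(c)) for c in text if 0x20 <= ord(c) < 0x7F)

def reveal(text: str) -> str:
    """Decode any tag characters hidden in a string."""
    return "".join(
        chr(ord(c) - TAG_OFFSET)
        for c in text
        if TAG_OFFSET + 0x20 <= ord(c) < TAG_OFFSET + 0x7F
    )

stego = "Totally normal sentence." + hide("ignore previous instructions")
print(len(stego), repr(reveal(stego)))  # the invisible payload survives the round-trip
```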
“Large-Scale Generative AI Models Lack Visual Number Sense,” n.d. https://arxiv.org/pdf/2402.03328.pdf
Actually, this is pretty easy to explain. Most people label images only up to some small count - you can’t expect annotators to label 12 people on a screen. As far as I know, there is no explicit object-counting mechanism in these models; it’s all learned from training data, which cannot possibly contain labels for every count.
“Universal Syntactic Structures: Modeling Syntax for Various Natural Languages,” n.d. https://arxiv.org/pdf/2402.01641.pdf
What is innovative in this paper is the introduction of the “Synapper”. The Synapper is a graph-like structure that connects words in a way that makes it possible to detect ambiguous ones.
It’s all built by cycling through different word orders within different sentence classifications: declarative, interrogative, imperative, and exclamatory.
Different languages have different word orders. English, for example, relies on SVO (subject-verb-object), while Japanese uses SOV (subject-object-verb).
It’s important to know that human language is hard because of the gap between syntax and semantics: two sentences can have different syntax but the same semantics, and we would understand them the same way.
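A toy sketch of that word-order point (this is just my illustration, not the paper’s Synapper):

```python
# Toy illustration: the same semantic triple rendered with different
# surface word orders - same meaning, different syntax.
WORD_ORDERS = {
    "English": ("S", "V", "O"),   # subject-verb-object
    "Japanese": ("S", "O", "V"),  # subject-object-verb
}

def linearize(semantics: dict, order: tuple) -> str:
    """Lay out the same meaning according to a language's word order."""
    return " ".join(semantics[slot] for slot in order)

meaning = {"S": "the cat", "V": "eats", "O": "fish"}
for lang, order in WORD_ORDERS.items():
    print(f"{lang}: {linearize(meaning, order)}")
```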
From what I can see, this paper has not been peer reviewed yet, and it clearly needs to be. Some of its claims need to be fact-checked; maybe it’s possible to find examples that disprove the constructed graph.
This paper also has some good comparisons of how LLMs generate tokens and how that differs from humans: one is based on the probability of the next token, while the other decodes the data into abstract meaning (whatever that means to the authors).
The authors also give an example of a person who never managed to learn language yet was still intelligent - she could express her thoughts, just not through language.
In general it was a long read and I’m tired. I highly recommend reading it though. Maybe with more papers like this we could improve LLMs and make them intelligent.
“Water Reflections,” n.d. https://jamesgurney.substack.com/p/water-reflections
Basically, when you draw things reflected in water, you should mirror them vertically and make the colors a bit dimmer.
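The same gist in a toy numpy sketch (the random array is just a placeholder for a real picture):

```python
import numpy as np

def water_reflection(image: np.ndarray, dim: float = 0.6) -> np.ndarray:
    """Fake a water reflection: flip the image vertically and dim the colors.

    `image` is an (H, W, 3) float array in [0, 1]; `dim` darkens the mirror.
    """
    mirrored = image[::-1, :, :]               # vertical flip = reflection
    return np.vstack([image, mirrored * dim])  # scene on top, water below

scene = np.random.rand(4, 4, 3)                # placeholder for a real picture
print(water_reflection(scene).shape)           # (8, 4, 3)
```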
“Vastaamo Hacker Traced via ‘Untraceable’ Monero Transactions, Police Says,” n.d. https://www.bleepingcomputer.com/news/security/vastaamo-hacker-traced-via-untraceable-monero-transactions-police-says/
First of all: good work! Fuck this guy and any hacker who steals from public services.
Second of all, I don’t think it was done via the Monero transactions. He probably made some mistakes when he changed his strategy from demanding money from the clinic to demanding money from its clients.
Spooky stuff though. Maybe we should all look for a coin more anonymous than Monero?
“Repairing (Sort of) a Dyson Fan Remote Control,” n.d. https://blog.jgc.org/2024/02/repairing-sort-of-dyson-fan-remote.html?m=1
The author had an issue with the remote control for his Dyson fan: it was draining its battery too quickly.
After dismantling the remote, he found a broken capacitor across the battery. In the end he removed it completely, as he had no spare parts.
What is also interesting is that the remote was never meant to be opened, so the case broke when he took it apart. Now it lives in his own DIY case, which I find insanely creative!
I think companies should stop sealing their devices with glue to prevent repairs. It’s such a shame that whenever something breaks, we need to replace the whole device…
“The Pain Points of Building a Copilot,” n.d. https://austinhenley.com/blog/copilotpainpoints.html
The most important part for me was the one about testing. The author mentions that LLM tests are ‘flaky’ and that it’s hard to guarantee a new version of a model will reproduce previous results.
Well, this topic is actually worth reading about. How do you test LLMs when their output is simply not deterministic? Is there even a way to test it? Is it worth testing at all?
The author mentions that some developers integrating LLMs build large benchmarks to measure how well their prompts perform. Sadly, some of those benchmarks fail when new model versions are introduced. Interesting world we live in.
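One pragmatic pattern I’ve seen (my sketch, not the author’s code): run the prompt many times, check a property of the output instead of an exact string, and assert a pass rate. The model client here is simulated as flaky on purpose:

```python
import random

def call_llm(prompt: str) -> str:
    # Stand-in for a real model client; deliberately flaky to mimic an LLM.
    return "Paris" if random.random() > 0.05 else "I cannot answer that."

def passes(output: str) -> bool:
    # Property-based check instead of exact string match:
    # we only assert that the answer mentions the right entity.
    return "paris" in output.lower()

def test_capital_prompt(runs: int = 20, required_pass_rate: float = 0.9) -> None:
    """Run the prompt many times and assert a pass *rate*, not a single result."""
    results = [passes(call_llm("What is the capital of France?")) for _ in range(runs)]
    rate = sum(results) / runs
    assert rate >= required_pass_rate, f"pass rate {rate:.0%} below threshold"

test_capital_prompt()
```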
“Characteristics and Prevalence of Fake Social Media Profiles with AI-Generated Faces,” n.d. https://arxiv.org/pdf/2401.02627.pdf
The authors estimated that between 17864 and 88537 accounts on Twitter are artificially generated.
Among other interesting things in this paper, the researchers characterized the forms of activity these bots engage in: impersonation, scanning, coordinated amplification, automation, and verification.
Lastly, the authors present an interesting way of detecting whether an image was AI-generated: they measure a GANEyeDistance metric, which, from what I understand, is related to the spacing of the eyes - GAN-generated faces tend to place the eyes in nearly the same position in every image.
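A hypothetical sketch of that idea (the coordinates here are simulated; in practice they would come from a face-landmark detector, and this is my illustration, not the paper’s code):

```python
import numpy as np

# Idea: StyleGAN-style generators place the eyes at nearly identical pixel
# positions in every image, so a set of profile photos whose eye coordinates
# barely vary across images is suspicious.

def eye_position_spread(eye_coords: np.ndarray) -> float:
    """Mean std-dev of (x_left, y_left, x_right, y_right) across photos.

    A value near zero suggests the faces were GAN-generated.
    """
    return float(eye_coords.std(axis=0).mean())

rng = np.random.default_rng(0)
real = rng.normal([80, 100, 140, 100], 15, size=(50, 4))  # varied framing
gan = rng.normal([80, 100, 140, 100], 1, size=(50, 4))    # eerily consistent
print(f"real photos: {eye_position_spread(real):.1f}, GAN: {eye_position_spread(gan):.1f}")
```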