Links and Notes - January 20th 2025
Copied information online needs to go away
The article "The creative world's bullshit industrial complex" continues to cement itself as one of the greatest articles I've read, for how relevant it has stayed over the long run.
Whenever I open X, Threads, LinkedIn, or any social platform where some people's goal is clout chasing, I find myself inundated by threads sharing some kind of knowledge about a person or an interesting topic. It's not bad information, mind you. It's just ripped and reformatted. Ripped from YouTube. Or a blog. Or a book. And repurposed as if it's original content. I can't make a strong argument that it shouldn't happen at all. But it feels sleazy and wrong when people present information as if it's their own. It's not even that they merge multiple pieces of content to weave a story. Many times it's just a retelling of a single original piece. References to the source content are few and far between, and when one does appear, it's quietly stuffed at the bottom of the long thread, after the "if you like this, don't forget to follow or subscribe" post.
I'd like to see any kind of content repurposing done with at least an upfront reference to the original content and an admission that it is repurposed. "Today I'd like to summarize this video that shares the story of X."
This doesn't apply just to text either. YouTube is full of these, and with the advent of short formats it feels like it's gotten worse and is everywhere now.
Here's an example of this in action. This is a video discussing Jayden Daniels' training using VR, which in itself seems to be a reproduction of the news articles here and here. The articles from the NY Times seem to be the only truly original reporting, honestly, since most of the source videos in the YouTube video are also from videos referenced within the article.
Then there's this Twitter thread, which is basically a reproduction of the YouTube short, but in text - https://x.com/thetoddjacob/status/1881059079449391270
This type of behaviour has gotten to the point where, whenever I read some random person's thread detailing the history or background of some interesting topic, I now default to assuming it's copied information. My only goal then is to try to find the source and skip the third parties entirely.
Truly, the bullshit industrial complex is alive and growing.
But the promise of VR?
I will say this though. Setting aside the issues of low-energy media repackaging, I'm loving that the promise of VR is coming to light more and more. I've been a huge believer in the tech and got the Quest 3 as soon as I could get it in Sri Lanka. It took me a while to get used to it, primarily because I wear specs. But once I got my pop-in lenses, I found I can use it as my primary work medium the entire day. Work with a virtual monitor in VR. Play very realistic table tennis as a break. Fly planes. Use it when driving in the sim. And I can see all the potential in the world for it, even if I use it for more simplistic things. The article and video in the previous section show just how much can be on the line in a world where this tech is normal.
The tech has a ways to go, though, in being made smaller and more socially accepted when people are out and about. But its promises will remain very real no matter how long mass adoption takes.
All promises except the one where it would be a gateway to the metaverse. I will remain sad that, despite having the vision for VR as a computing medium, the company managed to flub its release so badly by hitching it to the horse that was the metaverse and expecting it to behave like a wagon. The metaverse concept is interesting. But it cannot be dictated by any one computing medium at this time, I think. I would love to see it come to fruition, though more in the form imagined by Shaan Puri than Ready Player One.
The fall of an empire?
It's really weird watching the US presidency shenanigans at the moment. Firstly, the election of a man who has been overtly corrupt, downright disgusting, and willing to destabilize legal institutions in order to get what he wants has been a head-scratcher for me. I won't claim the Democrats put up a great candidate. But the existence of Trump and the cultish tribe that follows him baffles me. America has always struck me as a country that is deeply flawed in many humanistic ways but absolutely adamant about upholding certain principles. Its commitment to certain freedoms is impressive from the vantage point of a person who has lived in a much less free environment. Flawed. But impressive.
To see a country cede those advantages so willingly, just so that their tribe wins, has been bizarre to watch. And now, watching the president of the most powerful economy in the world use his position to launch a meme coin feels like such an obvious abuse of that position. But the country laughs and goes along with it. The court rules in favour of blocking TikTok, but in what seems a case as clear as mud, the app comes back online because Trump has assured them protection in the future?? I don't understand. How is this blatant abuse of power not being checked?
It scares me. I don't want my son to experience a world at war. I don't want him to have to experience a world of turmoil as an empire eats itself to death. A complete own goal. An unforced error of great magnitude. I certainly hope this isn't the fall of an empire that I must witness. I wish it need not happen in my time.
The rise of another world?
Today is a day for questions about the future. Over the past week or so I've been seeing rumbling after rumbling on X about OpenAI's AI reaching some kind of terminal velocity: the idea that the AI has reached the point where it can train itself and release its next version, which then trains itself again.
Not so long ago, all I heard about was model collapse: the idea that if an AI tried to train itself on synthetic data, material generated by itself, the model would eventually collapse into nonsense. Now to hear that the model might be displaying emergent behaviours, and that we could be looking at a model that controls and grows itself, is not a concept I'm ready to grapple with. I don't want a timeline in which my son watches nations go to war with each other using nukes. I also don't want him to have to grow up in a world where people are trying to figure out how to stay useful in a world run by AI agents.
That said, I don't like to buy into hype too easily either. I remember Project Q*, which had a similar hype cycle where the model was supposed to be hitting some kind of godlike behaviours. That didn't pan out to the expected levels, although people are stirring up that chatter again with questions like "Is this what Ilya saw?". And as much as I can see the rumblings from OpenAI folks and adjacent researchers, I can also be skeptical, since I'm aware that things like contracts with Microsoft are on the line. That too is a funny story: Microsoft is supposed to lose access to OpenAI tech once they hit AGI, but there's a financial definition too, where they are supposed to have a system that can generate $100 billion in profits. From the TechCrunch article,
The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect.
The wording there is a bit iffy though. "Can generate" and "has generated" are two very different things. But I digress. Outside of contractual loopholes, there's also OpenAI's own website, which at this time still shows 196 openings, of which at least 114 are engineering-aligned. A world that has achieved some kind of superintelligence does not seem like one that would have this many positions open.
And of course there's the usual stuff. Scroll down far enough in the feeds of even the most ardent OpenAI fan theorists and you'll see posts amidst the hype asking "why is the hallucination so bad?" or "why is the model generating more text but less useful stuff?", or things along those lines. There's a juxtaposition of extremes here.
But on the worrying side, this article from Axios has a few extra signals that do point to something. Like this quote:
We've learned that OpenAI CEO Sam Altman — who in September dubbed this "The Intelligence Age," and is in Washington this weekend for the inauguration — has scheduled a closed-door briefing for U.S. government officials in Washington on Jan. 30.
So while it's not guaranteed to be the real deal, it's not nothing either. And that worries me a little. Good or bad or in between, I hope we find out soon what this new world(?) looks like. Working with unknowns drives me mad.
This blog doesn't have a comment box. But I'd love to hear any thoughts y'all might have. Send them to [email protected]
Previous links and notes (October 17th 2023)
Posted on January 20 2025 by Adnan Issadeen