fake journal clubs, being wrong in public & the perfect espresso
CC#54 - Using GPT for Teaching, Effective Altruism Criticism & Small Teams
Hey there and welcome to ✨ CuratedCuriosity - a bi-weekly newsletter delivering inspiration from all over the internet to the notoriously curious.
Things I Enjoyed Reading.
🎭 My 2022 self (I don't know them) was very wrong about meditation, huge monitors, and... sleep.
I think it needs to become more acceptable and normal to publicly change one's beliefs, thereby enabling more collective learning. To me, this is a prime example of what that could look like.
When I read Richard Hanania's Reflections on 2022, in which he listed a bunch of major topics he changed his mind about, I realized that I really admired him for doing this. Then, immediately afterwards, I thought "Wait, am I worse than Hanania? The person who made all of Twitter angry at him more times than everyone else, dead & alive, combined? If he can call himself out on his bullshit, so can I."
Thus, this post. Briefly:
Meditation is terrible -> meditation is amazing.
Sleep the minimum sustainable amount -> sleep enough to have maximum energy.
The more and bigger monitors the better -> one 16" monitor is perfect.
Note that a lot of what I previously believed is stupid, wrong, or just doesn't make sense. I can't say I'm proud of any of that stuff, but, also, what else did you expect from a person whose claim to fame is being prolific in hot takes on Twitter?
📚Fake Journal Club: Teaching Critical Reading
How do you teach active reading and research criticism? An interesting proposal for a 'fake journal club' based on (partially) GPT-generated science articles.
So perhaps we can break off the active-reading chunk and make a specialized Fake Journal Club (FJC) which focuses on teaching just that, for people in many areas, without needing to be impossibly expert in every single niche there is or will be?
I think Fake Journal Club should be possible, because active reading is something that can be done at any level of domain expertise. Even if you do not know much about an area, you should be able to understand if there are logical gaps in an argument or if it is getting the usual sorts of results, and learn more from reading it than a blind acceptance of its claims. […]
Using real science papers is problematic. Trivia questions are super-abundant, extremely short, and no one will know them all, so calibration training can use them for rapid testing & clearcut feedback. Papers are long. How do we give feedback to a reader of a paper on their active reading?
🗣️ Tyler Cowen on Effective Altruism [🎧]
A critical presentation by economist Tyler Cowen in which he explains his thoughts on Effective Altruism, followed by a discussion with other researchers that contains a lot of interesting and relevant perspectives.
Food for Thought.
🤓 While AI is often presented as something like a 'threat to education', it also offers a lot of opportunities to teach better, similar to the idea of the 'fake journal club' above: for example, by encouraging students to create examples of certain applications of concepts and then having them criticise and evaluate those.
🧑🤝🧑 Small teams > big teams for launching things?
⚡️ I don’t necessarily want to praise Elon, but that is pretty impressive.
Random Stuff.
⭕️ Thought a lot about the reproducibility of data clustering lately and this xkcd comic is just spot on.
☕️ A short and entertaining history of home espresso machines and the never-ending hunt for the perfect espresso shot.
📹 An AI that creates summaries of YouTube videos - maybe useful for research on YouTube, but also for generating summaries of your own videos, podcasts, etc., or for getting some upfront information on whether it's worth watching a specific video.
Personal Update.
No news at the moment, but all good on my end.