I recently came across a post on X which shared this image of Stack Overflow's traffic since ChatGPT was released.

This piqued my interest, because one of my main concerns about the release of ChatGPT and the proliferation of public LLMs is what will happen when we become so dependent on them that we stop sharing knowledge in the traditional way.
We all harbour a bit of imposter syndrome, and it takes courage to ask a question that might seem obvious on a forum full of experts. LLMs like ChatGPT give us access to content from these forums in a much friendlier and more convenient way. ChatGPT is very polite and would never tell you, rudely, that your question has already been answered and that you should do a better job of searching the forum in future. I am not having a go at Stack Overflow here; it's just an example of the type of behaviour one can find on pretty much every knowledge-sharing forum.
The question for me is: if people stop going to these forums to ask questions, and the forums slowly fall into disrepair, then where will the LLMs get their training data? As the forums become less and less used, the knowledge base will degrade over time, which in theory will also degrade the LLMs. Once these platforms have lost their network effect, it will be very difficult to build it up again. Could we be inadvertently heading into a digital dark age, where we forget how to innovate?
Stack Overflow recently posted about the above data to debunk the claim that their traffic had dropped by 50%, indicating that the drop was more like 5%. (You can read their post here.) In their post they address the use of LLMs and suggest:
- Developers will use LLMs for simpler problems and come to SO for more complex problems as they become more knowledgeable.
- LLMs will democratise coding, which will bring more developers to the domain and therefore increase the use of forums like SO.
- LLMs still have a habit of hallucinating, which means developers will need to come to SO for trusted solutions.
I tend to think the above is a combination of wishful thinking and maybe a bit of public relations to appease their investors.
This problem also applies to organisations, which may find that their internal bodies of knowledge begin to atrophy as their teams adopt LLMs. I believe every organisation should have an internal LLM capability, along with an initiative tasked with harmonising internal knowledge bases with LLM functionality in a way that manages how curated knowledge transitions into the LLM's knowledge base. This should help avoid the hallucination problem, and it keeps the experts in your organisation close to the knowledge base and extending it.
LLMs have the potential to finally solve the knowledge-management problem we have been struggling with for so long, by making it far easier to interact with knowledge bases than we have been able to in the past. We just need to be careful that we don't abstract ourselves away and become so dependent on LLMs that we slowly end up as a civilisation that is no longer creating and curating knowledge for itself.
I believe Stack Overflow, behind the scenes, will be treating this as an existential threat. They are already in the game with their own LLM product, OverflowAI. Hopefully they can build it in a way that synthesises the benefits of both the forum and the LLM, harmonising the two into a superior experience that keeps developers returning to the platform. In particular, expert developers need an incentive to keep adding content by solving the more complex problems, so that we keep expanding our body of knowledge. Stack Overflow really is in the best position to do this, and I can imagine how they could achieve it through their UX.