Can developers actually keep up with the breakneck pace of generative AI? ExamPro Founder Andrew Brown joins Open at Intel host Katherine Druckman to explore the impact of emerging tools like DeepSeek, and what developers really need to know. He talks about agentic workflows, AI PCs, and open source accessibility for a crash course in future-proofing your skills. Read the full transcript for practical tips, fresh perspectives, and a few spicy side stories (yes, including kimchi).
“I think it's really important that we exercise the ability to make sure that we have open and free things, because if we don't, then big companies are going to build moats around them and we're going to be paying. Once that's all we use, the price goes up.”
— Andrew Brown, Founder, ExamPro
Katherine Druckman: Hi, Andrew, thank you for joining me. I really appreciate it. I know you're super busy. You record a lot of videos and podcasts. Do you do back-to-back all day talking to people?
Andrew Brown: I do the best that I can.
Katherine Druckman: That's amazing. So, Andrew, can you just introduce yourself a tiny bit to our podcast audience?
Andrew Brown's Background and Current Work
Andrew Brown: I'm Andrew Brown and I'm located in Canada. My background is EdTech, so I used to be a CTO for various different kinds of EdTech companies. Now I have my own company called ExamPro, and I teach people all different types of tech certifications, and I run community boot camps to upskill folks in the community.
Katherine Druckman: That's fabulous. I used to be a developer, a software engineer. I like to introduce myself as a recovering engineer, but I really appreciate the work that you do because in my past, I have benefited from that type of online training tremendously. I hope we have a lot of developers or current engineers that listen to this show. Can you tell us a little bit about what you're really excited to teach people right now?
Deep Dive into Generative AI
Andrew Brown: Everyone's current favorite trend, which is GenAI. I really didn't think that I was going to be going down the route of GenAI, but I got hooked when I found a personal use case for myself.
Katherine Druckman: Yes, always.
Andrew Brown: For language learning. All of my educational videos come from me applying something practical. I rarely make content about something I'm not already applying to some real use case. I found a good use case, and that makes for good teaching. Now we're down the GenAI rabbit hole.
Katherine Druckman: Out of curiosity, what was that use case?
Andrew Brown: Right now, I am learning the Japanese language. It's just a personal challenge of mine to be able to speak a very challenging language. It was between Mandarin or Japanese. Most of my friends like Japanese, and so some of them are learning, and I had a possible group to learn with.
Katherine Druckman: That's very cool. You give a lot of training to developers on generative AI applications at this point. What do you tell them about things like evolving trends, or keeping up with evaluating models to include in their applications? While they're immersed in learning the development itself, how do you explain keeping up with everything that's going on? Because it is moving very rapidly.
Andrew Brown: GenAI is very turbulent because things are constantly changing. As emerging technology comes out, you have to move with it. But there are things you apply immediately. Say DeepSeek comes out: you start utilizing it, but underneath there's an underlying concept that you carry forward once you learn it. As the industry matures, and this is the same as with cloud, or when web frameworks changed, these conceptual things emerge. With GenAI, people are now talking about agentic workflows or knowledge graph databases or… I can't remember the term that DeepSeek uses for their type of model, but it does reasoning, where with other models you would have to orchestrate that kind of reasoning or thinking yourself, externally. Now it's built into the model. These things are conceptual and they persist; there's no reason not to learn them. We just use whatever we currently have to apply those conceptual skills.
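The distinction Andrew draws, reasoning orchestrated externally by your application versus built into the model, can be sketched with a stub "model" function. Everything here is hypothetical scaffolding for illustration, not any vendor's API; the stub simply evaluates arithmetic so the chaining is visible.

```python
# External reasoning orchestration: the application, not the model,
# breaks a task into steps and feeds each intermediate result back in.
# Models with built-in reasoning do this chaining internally.
# The "model" here is a toy stub that evaluates one arithmetic step.

def model(prompt: str) -> str:
    # Stand-in for an LLM call; handles one tiny step at a time.
    expr = prompt.split(":")[-1].strip()
    return str(eval(expr))  # toy stub only; never eval untrusted input

def orchestrated_reasoning(steps: list[str]) -> str:
    """Externally chain intermediate steps, carrying each result forward."""
    result = "0"
    for step in steps:
        # The orchestrator substitutes the previous result into the next step.
        result = model(f"compute: {step.format(r=result)}")
    return result

# Break "(2 + 3) * 4" into explicit steps the orchestrator manages:
print(orchestrated_reasoning(["2 + 3", "{r} * 4"]))  # → 20
```

With a reasoning-capable model, the loop above collapses into a single call; the conceptual skill, decomposing a task into dependent steps, carries over either way.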
Katherine Druckman: I'm glad you mentioned it and I didn't have to. DeepSeek. Again, the pace at which things are moving, it feels like it happened a long time ago. It was only a few weeks ago, and that seems like a long time ago. But why were people so excited about DeepSeek? I have my own ideas, but I'd rather hear yours.
Andrew Brown: It could be that I'm wrong here, but it was the cost savings for training and inference. The idea was that OpenAI would spend a considerable amount of money on compute, specifically on AI inference and training. It was one of those key moments where they said: you don't need X amount in a big GPU farm, you can now do that with $6 million. Since then, there has been some misrepresentation of that information; apparently, more GPUs were required on the training side than was suggested. You'd have to double-check the literature, but I do believe the inference cost is lower at scale. If all those things are true, that could introduce more competitors into the space who can compete with OpenAI, because now they can raise the capital not just to train but to run inference on their own custom models and build their own things around them.
Katherine Druckman: Well, the stock market sure agreed with you at least early on, although it seems to have stabilized a little bit.
Andrew Brown: There was a bit of a joke here, for those that don't know. I was a bit opportunistic, and I produced a free DeepSeek course on freeCodeCamp, and it did extremely well, to no surprise.
Katherine Druckman: No, not at all.
Andrew Brown: But then shortly after, a very large GPU provider, their stocks had gone down. And so the joke is that my video might've driven their stocks down, but …
Katherine Druckman: Oh, no.
Andrew Brown: I don't think that's true.
Katherine Druckman: Probably not. No. It's a good joke.
Andrew Brown: But yeah, it's a funny joke.
DeepSeek and AI Model Costs
Katherine Druckman: It's funny. Taking a slightly higher-level view on something like DeepSeek: how does it change the conversation about training and inference and all of the other things that you talk to people about all of the time?
Andrew Brown: DeepSeek wasn't as interesting to me. The fact that it can do built-in reasoning is very interesting, because now you don't have to build that out yourself. But if you're building a custom agent, you probably want to build your own high-reasoning logic for your specific use case; at least that's my opinion. But I think a lot of people thought this really amazing model, one that can compete with OpenAI's models or surpass them on some benchmarks, was something they could run locally.
I can't remember how many parameters it was, 671 billion? There's a specific number of parameters, and to run that locally is nearly impossible. There was one person who took a bunch of Mac minis, stacked them, and distributed the compute across those eight machines, but they also had to quantize the model. They simplified it, so technically they could get it to run, but what it could actually do, who knows? I can't imagine the performance was great. The models that are more realistic for us to run on, say, an AI PC, if you have one, would be something like a 7-billion-parameter model.
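The back-of-the-envelope math behind why a frontier-scale model is impractical locally while a 7B one is not can be sketched as follows. The byte widths are standard (fp16 = 2 bytes per parameter, 4-bit quantized ≈ 0.5 bytes), and the figures cover weights only, ignoring KV cache and runtime overhead:

```python
# Rough memory needed just to hold model weights, ignoring KV cache
# and runtime overhead. Bytes per parameter depends on precision:
# fp16 = 2 bytes, 8-bit quantized = 1 byte, 4-bit quantized = 0.5 bytes.

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate gigabytes required to store the weights alone."""
    return num_params * bytes_per_param / 1e9

# A frontier-scale ~671B-parameter model:
big = 671e9
print(f"671B @ fp16:  {weight_memory_gb(big, 2):,.0f} GB")    # ~1,342 GB
print(f"671B @ 4-bit: {weight_memory_gb(big, 0.5):,.1f} GB")  # still ~335 GB

# A 7B model, realistic for an AI PC:
small = 7e9
print(f"7B   @ fp16:  {weight_memory_gb(small, 2):,.1f} GB")    # 14 GB
print(f"7B   @ 4-bit: {weight_memory_gb(small, 0.5):,.1f} GB")  # 3.5 GB
```

Even aggressively quantized, the big model needs hundreds of gigabytes, which is why it took a stack of machines with distributed compute, while a quantized 7B model fits comfortably in laptop memory.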
It is interesting to see it do the reasoning, and it does open people's eyes in terms of how you want your model to think. But to me it wasn't that interesting, because I didn't see huge performance gains; I think you'd only see those at scale. Right?
Challenges and Opportunities in AI Development
Katherine Druckman: You mentioned AI PCs, and we talked about this before I hit record. Where is the most interesting conversation with regard to something like a device, an AI PC and the intersection of open source software? Does that democratize AI? Does it make this type of development more accessible? What are your thoughts on that?
Andrew Brown: I think it's really important that we exercise the ability to make sure that we have open and free things, because if we don't, then big companies are going to build moats around them and we're going to be paying. Once that's all we use, the price goes up. I think people are motivated to learn and utilize these things, because once they find out how useful they are, they go, "This is great, but what happens when that rug pull happens, if and when?" And we've heard some of them say it; I think it was OpenAI specifically that said they were going to increase prices at some point. They introduced that $200 plan, but the expectation is that costs will keep rising. If we can give developers the domain knowledge to get over the hump of utilizing models locally, and those skills, then they're going to have the freedom to use them as they want. These open-weight models are really important so that that doesn't happen.
Katherine Druckman: How does all of this translate to users of AI applications in terms of making all of this a little bit more private and personal and closer to the individual?
Andrew Brown: I think there's another aspect to that. We want to be able to run models locally because, what if one day that service gets more expensive, or vanishes because some country doesn't like a particular model? I found out through my bootcampers, when we were learning how to use AI-powered assistants, that Meta AI is not available in certain EU countries, or maybe all of them, I don't know.
Katherine Druckman: That's interesting.
Andrew Brown: Because of how they collect data. Or maybe Meta AI just hasn't been made available there. Or we heard that DeepSeek might get banned in certain places; I think South Korea was suggesting they might ban it soon. When those things happen, you want to be able to keep using the thing you're using. But the other aspect is the capabilities we're going to have on the edge. As opposed to having to manage cloud services, we already have this raw compute. Why can't we run things locally on iGPUs or NPUs? Why don't we leverage those? I think there are technologies coming out that are making it easier. Don't quote me on it, but I think it's WebNN. WebNN is a generic interface that makes it really easy to interface with NPUs and iGPUs locally. When those things become more accessible, I think it'll just be normal. Just as an HTML5 webcam widget leverages your local webcam, why not be able to do the same with AI inference?
Practical Advice for Aspiring Developers
Katherine Druckman: I wanted to pivot back to the developers you're training. Because of the role you play in fostering the next generation of developers, or helping people improve their skills, what does all of this mean for them? We're gathering a lot of data, we're moving really quickly, and when I say gathering a lot of data, I don't mean training models. I mean there's just a lot of information to take in.
Andrew Brown: Domain knowledge, right?
Katherine Druckman: Yes, exactly. Number one, how do you connect the developer with the knowledge that they need at the end of the day and what is it that they really need? I have my own ideas about what developers want. I think they just want to, from my own experience, close Jira tickets. Aside from that, when they're using a new skill, what do they want? What does this all mean to actual developers doing the real work?
Andrew Brown: I think the challenge in education, and this doesn't just apply to AI, is that we have a lot of authorities creating curriculums and certifications that prescribe what you should learn to be, let's call it, an AI engineer or an AI ops engineer. I'm one of the few who have actually gone and created free courses. I looked at NVIDIA's, Intel's when they had theirs, AWS's, Azure's; Google has one; there's one from the AI Institute. But unlike other, established certifications, these aren't well aligned with each other.
The challenge with all these other ones is that certifications issued by large authorities are a mix of education and a bit of a PR marketing tour, so they don't always align with real-world skills. Then you have small players who don't have the means to make certification courses. Sure, I obviously make certification courses, but the other reason I do is that a certification is a very good goalpost to get people working towards something. You can make all the free courses online you want, but if they're not organized, there's no end goal and no assessment. What I've done is create my own course called GenAI Essentials, and it goes across all providers, both cloud and managed. We look at the hiring aspects and the emerging stuff, and at a lot of different tools, and we've been having quite a bit of success. I'm not trying to plug it, I'm just trying to say…
Katherine Druckman: No, it's valuable information. You have a unique perspective.
Andrew Brown: …that I think you can't expect that kind of education to come from any of these larger bodies. We need one that is more democratized, or at least one that takes responsibility for balancing things, working with people who are actually in industry and who hold different opinions, to best reflect what the industry needs or wants.
Open Source AI and Developer Training
Katherine Druckman: So aside from taking your course, obviously, where else do developers start? What is a good point of reference? Where do you start as a developer? Maybe you have a lot of experience building other types of applications, but not generative AI, or maybe you don't, and you're starting from scratch.
Andrew Brown: A lot of the skills translate over. But when you come into GenAI, you might have misconceptions about what you have to use. For example, everyone thinks they have to use a vector store, because the first thing you learn about is a vector store: you embed your data and store it there, then retrieve it with similarity searches. People forget that you can also store data in a relational database, a graph database, and various other formats. You really have to think about your data and what structure you want it returned in. People are treating the vector store like a hammer right now. I did that at the start because I just forgot. I had to stop and think, "Wait, why am I storing it here, and what am I doing with it?" That is something people have to overcome.
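The point that a vector store is just one retrieval option can be sketched with a toy example. The hand-made "embeddings" and documents below are illustrative (real systems would use an embedding model); the comparison is between similarity search and the plain keyed lookup you might do against a relational or key-value store.

```python
import math

# Toy illustration: a vector store is one retrieval option, not the only one.
# Each document gets a tiny hand-made "embedding"; real systems would
# compute these with an embedding model.

docs = {
    "returns":  ([0.9, 0.1, 0.0], "Items can be returned within 30 days."),
    "shipping": ([0.1, 0.9, 0.0], "Standard shipping takes 5 business days."),
    "privacy":  ([0.0, 0.1, 0.9], "We never sell customer data."),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def vector_search(query_vec):
    """Similarity search: best match by cosine similarity (vector-store style)."""
    return max(docs.values(), key=lambda d: cosine(query_vec, d[0]))[1]

def keyed_lookup(topic):
    """Structured lookup: exact retrieval when you already know the key."""
    return docs[topic][1]

# A fuzzy query vector "near" the shipping embedding:
print(vector_search([0.2, 0.8, 0.1]))  # similarity search shines for fuzzy queries
print(keyed_lookup("returns"))         # a plain lookup is simpler when keys are known
```

The design question Andrew raises is exactly this choice: similarity search earns its keep when queries are fuzzy, but if your data already has natural keys or relationships, a relational or graph store may fit better.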
Another thing is that there is a lot of application integration, or orchestration of services. This is still a skill you would definitely need in the cloud, or in DevOps, or the like. I think a lot of people assume they have to go to LangChain or LlamaIndex, which are opinionated orchestration tools; there are others out there as well. But you honestly can just take a framework like OPEA, which is just generic containers. It actually uses LangChain inside.
Katherine Druckman: Yes, makes sense.
Andrew Brown: And string those together. It looks very similar to how you'd set up a cloud native ecosystem for an enterprise. Those skills still translate over. I think people have those false starts. On the development side, you're still programming. There are many coding assistants that will do a lot of the work for you, but at some point you have to understand all parts of that code and jump into the driver's seat when it stops producing what you wanted, because it only gets you to a certain level.
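The "string those together" idea can be sketched without any framework: a minimal retrieve-augment-generate pipeline where each stage is a plain callable. The retriever and "model" below are hypothetical stubs; in an OPEA-style deployment, each stage could equally be a separate container behind an HTTP endpoint.

```python
# A minimal RAG-style pipeline wired together with plain Python,
# no orchestration framework. Each stage is just a function; in a
# container-based setup each would be a service call instead.
# The knowledge base and "model" are stubs for illustration only.

def retrieve(query: str) -> str:
    # Stand-in for a real retrieval call (vector store, SQL, etc.).
    knowledge = {"opea": "OPEA stands for Open Platform for Enterprise AI."}
    return next((v for k, v in knowledge.items() if k in query.lower()), "")

def augment(query: str, context: str) -> str:
    # Build the prompt from the retrieved context.
    return f"Context: {context}\nQuestion: {query}"

def generate(prompt: str) -> str:
    # Stand-in for an LLM inference call.
    return f"[model answer based on] {prompt}"

def pipeline(query: str) -> str:
    # Orchestration is just composition: retrieve -> augment -> generate.
    return generate(augment(query, retrieve(query)))

print(pipeline("What is OPEA?"))
```

Swapping any stage for a real service (a vector database, a model server) changes the function bodies, not the shape of the pipeline, which is the cloud native skill that translates over.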
Katherine Druckman: I wanted to jump in really quickly because you said OPEA, I wanted to make sure people know that that is Open Platform for Enterprise AI, in case anybody did not recognize that acronym.
Andrew Brown: Well, it can confuse people, because this is kind of a cloud native project in the Linux Foundation, and in the cloud native space there's another closely associated project called OPA.
Katherine Druckman: There is another OPA project, yes.
Andrew Brown: Even when I try to say it, I prefer to spell it out: O-P-E-A. The other thing is, how do you actually deploy these into production? This is another challenge: where do we have to use managed services? I think a lot of people are afraid to deploy their own LLM. There are some technical parts to it, but it's just like how we used to deploy web servers. If you only ever came into this world using something like Vercel ... I mean, I'm not saying they're doing this, but it's beneficial for them if you think deploying a web server is really hard. But it's not, right?
You just have to invest the energy and time for those domain skills. I understand people don't when they need to move quickly, but having the flexibility and knowledge to know that you can ... you make that trade of what responsibility you want to take on. It's the same thing with LLMs. Right now, LLMs seem really scary to deploy into production. There seem to be a lot of moving parts, and they seem hard to understand, but they're not. It's just a matter of that domain knowledge being shared with everybody, which is what I'm hoping to do by making free courses. The GenAI bootcamp I'm running right now will go into syndication and will be free. We're trying to build up thousands of developers with domain knowledge and get that information out there. The things that seemed really hard are then just a copy-paste away from some Medium article.
Katherine Druckman: Yeah, that's it. I truly do appreciate what you do because I know a few other people who work in a similar field to you doing a lot of technical training, and you really do have the ability to change somebody's life. You teach them a skill that they can go out and make a living with. And that's no small feat. But I wondered if, again, in the process of all that, is there anything that you wanted to share that I have not asked you about yet?
Andrew Brown: Right now, I'm making white kimchi, so I'm back into making fermented kimchi. I really like the spicy stuff, but it's not easy on my stomach. We're giving it a go, but it's really stinking out my house, so my wife wants me to throw it out.
Conclusion and Final Thoughts
Katherine Druckman: I'm really curious about that. But I really do appreciate also your message about just flexibility and staying agile in this process, not just in the learning process, I think, but also the development process. I think we as humans and as developers need to remind ourselves constantly that things are not necessarily written in stone. And I think that's sometimes hard for us.
Andrew Brown: Well, and also right now, nothing is written in stone with GenAI or AI technologies, and that makes it hard, because people don't have something super authoritative saying, this is the way to do it. The people who are willing to just say, "Hey, this is the best I could do, and it works," are the ones who are going to be codified as the way to do it, until someone actually points out that they're incorrect. There's a lot of opportunity for people to pave their way. For me, it's really exciting because I've seen this before. This is the third time I've seen this opportunity, and people are scared because they think they're going to lose their jobs because they have to learn a new skillset. But you actually have more opportunity right now than ever to advance in your technical career.
Katherine Druckman: I see it that way too. I think it's very exciting. We're breaking new ground; we're all pioneers together. And like you say, there is no one right way. We all get to participate in influencing what the right way will become, and that's very exciting. I'm slightly biased because I work for Intel, and we're heavily involved in the OPEA project, not to be confused with Open Policy Agent, which is a different OPA. OPEA is exciting because, I think, it not only actively does the work of figuring out what the right answer is, but then shares that as a reference with other people who are struggling. That's one of the greatest values I see in a project like that.
Andrew Brown: I think Intel needs to give themselves a bit more credit, because the project is now owned by the Linux Foundation, and Intel is still putting energy into it to make sure it's available not just for Intel, but for other providers as well.
Katherine Druckman: And Open at Intel. Here we are.
Andrew Brown: But I think a lot of projects are like that at Intel, right?
Katherine Druckman: Yes. We support a lot of open source software. Well, that's why I'm here, frankly. That's what I do here. I get excited about it, participate in it, and then I like to go and talk about it. I really appreciate it. Again, I cannot emphasize this more, but I appreciate the work that you're doing. Where can people find your generative AI boot camp and other training?
Andrew Brown: It might not be open source, but it definitely is an open course, and you can find it online. We have a website for the GenAI bootcamp at genai.cloudprojectbootcamp.com. A great place to find my stuff generally is my GitHub.
Katherine Druckman: Fantastic. Thank you very much. I hope some of our listeners can take advantage of that. And keep doing what you do. It's really awesome.
Andrew Brown: Ciao, ciao.
Katherine Druckman: You've been listening to Open at Intel. Be sure to check out more about Intel’s work in the open source community at Open.Intel, on X, or on LinkedIn. We hope you join us again next time to geek out about open source.
About the Guest
Andrew Brown, Founder, ExamPro
Andrew Brown is the founder of ExamPro, where he creates training materials for developers. He also creates free cloud certification courses for freeCodeCamp.
About the Host
Katherine Druckman, Open Source Security Evangelist, Intel
Katherine Druckman, an Intel open source security evangelist, hosts the podcasts Open at Intel, Reality 2.0, and FLOSS Weekly. A security and privacy advocate, software engineer, and former digital director of Linux Journal, she's a long-time champion of open source and open standards. She is a software engineer and content creator with over a decade of experience in engineering, content strategy, product management, user experience, and technology evangelism. Find her on LinkedIn.