Today I was invited by UAM to discuss AI and the impact it may have on the economy at the 2025 Annual Public Policy Colloquium. It was a blast to discuss both AI and Economics. I am extremely thankful to Dr. Pablo López Sarabia for the invitation.
The video is available here.
I have released an update to Local Intelligence to fix a small but critical bug. It turns out that some models have a context window smaller or larger than the range that can be configured with the slider, which could cause a crash. Everything should be working fine now. The App Store version was submitted yesterday and should become available during the day.
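The fix boils down to clamping whatever the slider requests to the range the selected model actually supports. Here is a minimal sketch of the idea; the function and parameter names are illustrative, not Local Intelligence's actual code:

```swift
import Foundation

/// Clamp a user-requested context size to what the model supports.
/// Names are illustrative; this is not Local Intelligence's real API.
func clampedContextWindow(requested: Int, modelMaximum: Int, minimum: Int = 512) -> Int {
    // Never exceed the model's real context window,
    // and never drop below a sane minimum.
    min(max(requested, minimum), modelMaximum)
}

// A model whose context window is smaller than the slider's upper bound:
print(clampedContextWindow(requested: 32_768, modelMaximum: 8_192)) // 8192
```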
I have recently released version 1.1 of my Ollama front-end for macOS. The new version fixes some minor bugs and adds several new features.
However, the real news is that I have decided to release the app on the Apple App Store to ensure wider distribution. The sandboxed version comes with a significant limitation: because sandboxed apps cannot launch arbitrary external executables, it will not be able to connect to STDIO MCP servers. Since this feature is important to many developers, I will continue offering the notarized version on my site.
The decision to offer my latest app on the App Store also forced me to change the app's name and icon. Ollama has been contacting developers who use its name, asking them to stop, and I didn't want that to happen to me. So the app is now called LocalIntelligence. I would have preferred something else, but it seems that all the cool names were already taken.
AI is moving ahead very quickly. Agentic AI has finally allowed generative AI to break out of the chat box and interact with the real world, obtaining real-time information and taking action by connecting to APIs. MCP has proven to be a significant breakthrough, and applications are popping up everywhere. That said, this is just the beginning: MCP-UI, an emerging AI standard, has the potential to change the way we browse the Internet and find information. In the future, proactive AI will revolutionize the way enterprises are run. These are just some of the topics I discussed at the 6th Metropolitan Forum in Mexico City, an event organized by Mexico's largest university, UNAM.
I really enjoyed discussing all these topics with the students in attendance and was able to cover most of the material in the hour I was allocated. That said, there were many more things I wanted to discuss in depth, so I recorded a video and published it on YouTube.
The video is in Spanish and you can see it here.
Today I released a new application, OllamaChat, a lightweight native Ollama client for macOS. It provides a GUI to easily chat with any LLM installed on your local (or, if you want, remote) Ollama instance.
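Under the hood, a client like this talks to Ollama's REST API. Below is a minimal Swift sketch of a non-streaming call to the /api/chat endpoint; it assumes a default local Ollama install with a llama3 model already pulled, and it is not OllamaChat's actual code:

```swift
import Foundation

// Minimal, non-streaming chat request against a local Ollama instance.
struct ChatMessage: Codable { let role: String; let content: String }
struct ChatRequest: Codable { let model: String; let messages: [ChatMessage]; let stream: Bool }
struct ChatResponse: Codable { let message: ChatMessage }

var request = URLRequest(url: URL(string: "http://localhost:11434/api/chat")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try JSONEncoder().encode(
    ChatRequest(model: "llama3",   // assumes `ollama pull llama3` was run beforehand
                messages: [ChatMessage(role: "user", content: "Hello!")],
                stream: false))

let (data, _) = try await URLSession.shared.data(for: request)
let reply = try JSONDecoder().decode(ChatResponse.self, from: data)
print(reply.message.content)
```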
This is a project I started because I wanted my team at IBM to learn more about how AI works, and to let them easily demonstrate IBM's Granite models even when an Internet connection is not available or can't be trusted (which is frequently the case at customer sites and during public events).
OllamaChat doesn’t just support regular or reasoning models. It works also with vision models (just drag and drop an image to start chatting with it). You can also use embedding models to understand how text is converted into vectors.
Finally, OllamaChat also supports MCP. This is actually the reason I built the app: I couldn't find an easy way to demonstrate Agentic AI on my computer without installing a lot of bloated software. Right now, OllamaChat works with local STDIO MCP servers and with unsecured remote (TCP/IP) servers. I plan to support secure servers in the future, but I decided to release this version because it is already very useful at this stage.
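To give an idea of what "local STDIO MCP server" means in practice: the client spawns the server as a child process and exchanges newline-delimited JSON-RPC 2.0 messages over its pipes. A rough sketch, with a hypothetical server path and no error handling:

```swift
import Foundation

// Spawn a (hypothetical) MCP server and perform the initialize handshake.
let server = Process()
server.executableURL = URL(fileURLWithPath: "/usr/local/bin/example-mcp-server") // assumed path
let toServer = Pipe(), fromServer = Pipe()
server.standardInput = toServer
server.standardOutput = fromServer
try server.run()

// MCP messages are JSON-RPC 2.0, one JSON object per line over stdio.
let initialize = """
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"OllamaChat","version":"1.0"}}}
"""
toServer.fileHandleForWriting.write((initialize + "\n").data(using: .utf8)!)

// The server answers with its capabilities; a tools/list request would come next.
let reply = fromServer.fileHandleForReading.availableData
print(String(data: reply, encoding: .utf8) ?? "")
```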
You can download OllamaChat here.
I was invited by the National Autonomous University of Mexico to give the keynote speech at their first international AI congress. It is always a pleasure to work with universities and share views on where the industry is moving. On this occasion, my talk wasn't technical; it focused on the global effects we may witness in the near future.
You can watch the conference here.
I was extremely fortunate to join Informix Software in 1997, back when Data Management was still an emerging discipline and Data Warehousing was in its infancy. That allowed me to witness the evolution of Data Analytics and to understand how the explosion of structured and unstructured data created hard-to-solve challenges, which were addressed by groundbreaking technologies such as NoSQL databases and by architectural decisions that let us handle vast amounts of data.
However, younger architects and IT Specialists do not always grasp the whole picture, and therefore don't fully understand how we got to this point.
That is why I gave a series of seminars to my LA technical team back in early 2024. Eventually, I decided that, since there wasn't much information available on this subject in Spanish, I would record a video about it.
You can watch the video here.
I just released a new video on my YouTube channel explaining (in Spanish) how to use IBM's Granite-code with VS Code for free. Since it is a fairly simple process, the video is quite short. Granite-code is a basic completion model with no reasoning capabilities, so its functionality is quite limited, but it still helps achieve increased productivity and supports over 100 programming languages. For a FREE model, it is hard to ask for more.
Watch it here.
Artificial Intelligence is changing our daily lives and will transform them in less than a generation (hopefully for the better). Everyone knows it. However, I have found that many of my developer friends do not know exactly where to start when it comes to infusing AI into their apps. That is why I published a video on YouTube (in Spanish) explaining all the basic concepts.
In this video I talk about transformers and how to represent complex data as vectors, which can then be stored in a vector database. In my case I use SingleStore, a product that I really enjoy and recommend. I developed all the demo apps in Swift so that they can run nicely on my laptop with few dependencies. It was also a great excuse to sharpen my Swift and SwiftUI skills.
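To make the vector idea concrete, here is a toy Swift example comparing embeddings with cosine similarity, the same kind of operation a vector database like SingleStore performs at scale. The vectors below are made up; real embeddings have hundreds of dimensions:

```swift
import Foundation

// Cosine similarity: values near 1.0 mean "same direction" (similar meaning),
// values near 0 mean the vectors are unrelated.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map { $0 * $1 }.reduce(0, +)
    let magnitudeA = sqrt(a.map { $0 * $0 }.reduce(0, +))
    let magnitudeB = sqrt(b.map { $0 * $0 }.reduce(0, +))
    return dot / (magnitudeA * magnitudeB)
}

// Made-up 4-dimensional "embeddings".
let cat     = [0.90, 0.10, 0.30, 0.00]
let kitten  = [0.85, 0.15, 0.35, 0.05]
let invoice = [0.05, 0.90, 0.00, 0.80]

print(cosineSimilarity(cat, kitten))  // high: related concepts
print(cosineSimilarity(cat, invoice)) // low: unrelated concepts
```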
Watch it here.
A couple of weeks ago I was invited by Meta to discuss IBM's point of view on the Metaverse. While I don't believe the technology is fully ready for prime time, the opportunities are intriguing and certainly worth exploring. From a back-end developer's perspective, AR/VR is simply a new channel that companies can enable to keep in touch with their customers wherever they are. While I do not believe that customers will spend many hours continuously wearing AR/VR headsets (at least the ones currently available), there are certainly tasks that will be best executed in a VR or mixed-reality environment. That is where IBM will provide the technologies and tools to help companies build realistic worlds that appeal to their customers, using all kinds of technologies such as scalable microservices, avatars, AI, and new generations of chatbots. You can listen to the complete interview (in Spanish) in Episode 10 of the "Hablemos del Metaverso" podcast.