Today I released a new AI demo to show the power of Transformers. This latest app is called FaceRecognition and can be downloaded from my AI demos page. The app compares any person’s picture you drag and drop onto it with a database of almost 13,000 images to display the closest matches. The app uses macOS APIs for face detection and the FaceNet model for face recognition. It also uses other models to evaluate gender, ethnic origin, age, and expression. Every time I work on a new AI project I am more amazed by what this new technology has achieved.
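FaceNet maps each detected face to an embedding vector, so recognition reduces to a nearest-neighbor search over the database of stored embeddings. A minimal sketch of that matching step (pure Python, with toy 4-dimensional vectors and made-up names standing in for FaceNet’s 128- or 512-dimensional embeddings; this is not the app’s actual code):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors: 1.0 means
    # identical direction, values near 0 mean unrelated faces.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def closest_matches(query, database, top_k=3):
    # Rank every stored face by similarity to the query embedding
    # and return the top_k best candidates.
    scored = [(name, cosine_similarity(query, emb))
              for name, emb in database.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

# Toy embeddings; a real database would hold thousands of FaceNet vectors.
db = {
    "alice": [0.9, 0.1, 0.0, 0.1],
    "bob":   [0.0, 0.8, 0.2, 0.1],
    "carol": [0.1, 0.1, 0.9, 0.0],
}
query = [0.85, 0.15, 0.05, 0.1]
print(closest_matches(query, db, top_k=1)[0][0])  # prints "alice"
```

In practice the face is first located and cropped with the macOS face-detection APIs before the embedding is computed, so only the aligned face region is compared.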
This is a minor release with one small but useful feature. I added a slider in the General pane of the Settings window to set the default context window size that will apply to all models. Previously, this value could only be set on a per-model basis. The App Store version is currently under review but should be available soon.
Today I have released three new macOS AI demos:
I have developed these demos for my team at IBM to help them easily demonstrate complex concepts to our customers. All three demos require a SingleStore account (each demo uses a different table to store its data) and a container running on OrbStack that provides the Python web services the apps use to connect to the database. Enjoy!
Today I was invited by UAM to discuss AI and the impact it may have on the economy at the 2025 Annual Public Policy Colloquium. It was a blast to discuss both AI and Economics. I am extremely thankful to Dr. Pablo López Sarabia for the invitation.
The video is available here.
I have released a small update to Local Intelligence to fix a critical bug. It turns out that some models have a context window smaller or larger than the values the slider allows, which could cause a crash. Everything should be working fine now. The App Store version was submitted yesterday and should become available during the day.
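A fix for this class of bug usually amounts to clamping the user-configured value into the range the model actually supports before it is ever sent to the model. A minimal sketch (hypothetical function and limits, not Local Intelligence’s actual code):

```python
def effective_context_window(configured: int, model_min: int, model_max: int) -> int:
    # Clamp the slider value into the model's supported range so an
    # out-of-range context window can never reach the model.
    return max(model_min, min(configured, model_max))

# A model that caps out at 8192 tokens silently overrides a 32768 setting.
print(effective_context_window(32768, 512, 8192))  # prints 8192
```

The same clamp covers both failure modes: a slider value above the model’s maximum and one below its minimum.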
I have recently released version 1.1 of my Ollama front-end for macOS. The new version fixes some minor bugs and adds several new features:
However, the real news is that I have decided to release the app on the Apple App Store to ensure wider distribution. The sandboxed version comes with a significant limitation, as the app will not be able to connect to STDIO MCP servers. Since this feature is important to many developers, I have decided to continue offering the notarized version on my site.
The decision to offer my latest app on the App Store also forced me to change the app’s name and icon. Ollama has been contacting developers who use their name and asking them to desist, and I didn’t want that to happen to me. So now, the app is called LocalIntelligence. I would have preferred something else, but it seems that all the cool names were already taken.
AI is moving ahead very quickly. Agentic AI has finally allowed generative AI to break out of the chat box and interact with the real world to obtain real-time information and act by connecting to APIs. MCP has proven to be a significant breakthrough and applications are popping up everywhere. That said, this is just the beginning, as MCP-UI, an emerging AI standard, has the potential to change the way we browse the Internet and find information. In the future, proactive AI will revolutionize the way enterprises are run. These are just some of the topics I discussed at the 6th Metropolitan Forum in Mexico City, an event organized by Mexico’s largest university, UNAM.
I really enjoyed discussing all these topics with the students in attendance and was able to cover most of the subject in the time I had been allocated (1 hour). That said, there were many more things I wanted to discuss in depth, so I recorded a video and published it on YouTube.
The video is in Spanish and you can see it here.
Today I have released a new application, OllamaChat. This is a lightweight native Ollama client for macOS. It provides a GUI to easily chat with any LLM installed on your local (or, if you want, remote) Ollama instance.
This is a project I started because I wanted my team at IBM to learn more about how AI works and allow them to easily perform demonstrations of IBM’s Granite models even when an Internet connection is not available or can’t be trusted (which is frequently the case at customer sites or during public events).
OllamaChat doesn’t just support regular or reasoning models. It also works with vision models (just drag and drop an image to start chatting with it). You can also use embedding models to understand how text is converted into vectors.
Finally, OllamaChat also supports MCP. This is actually why I built the app: I couldn’t find an easy way to demonstrate Agentic AI on my computer without installing a lot of bloated software. Right now OllamaChat works with local STDIO MCP servers and with remote (TCP/IP) unsecured servers. I plan to support secure servers in the future, but I have decided to release this version because it is already very useful at this stage.
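An STDIO MCP server is just a subprocess that speaks JSON-RPC 2.0 over its stdin/stdout, and the client opens the session with an `initialize` request. A sketch of building that first message (the protocol version shown is illustrative; check the current MCP specification before relying on it):

```python
import json

def mcp_initialize_request(client_name: str, client_version: str) -> str:
    # The first message an MCP client writes to an STDIO server,
    # serialized as a single JSON-RPC 2.0 request line.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # illustrative revision date
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": client_version},
        },
    }
    return json.dumps(request) + "\n"

line = mcp_initialize_request("OllamaChat", "1.0")
print(json.loads(line)["method"])  # prints "initialize"
```

A remote TCP/IP server exchanges the same JSON-RPC payloads, just over a socket instead of a pipe, which is why supporting both transports from one client is practical.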
You can download OllamaChat here.
I was invited by the National Autonomous University of Mexico to give the keynote speech at their first international AI congress. It is always a pleasure to work with universities and share points of view on where the industry is moving. On this occasion, my talk wasn’t technical; it focused on the global effects we may witness in the near future.
You can watch the conference here.
I was extremely fortunate to join Informix Software in 1997, back when Data Management was still an emerging discipline and Data Warehousing was in its infancy. That has allowed me to watch the evolution of Data Analytics and understand why the structured and unstructured data explosion created hard-to-solve challenges, which have been addressed by groundbreaking technologies like NoSQL databases and by architecture decisions that let us handle vast amounts of data.
However, younger architects and IT Specialists do not always grasp the whole picture, and therefore do not fully understand how we got to this point.
That is why I gave a series of seminars to my LA technical team back in early 2024. Eventually I decided that since there wasn’t much information available on this subject (in Spanish), I would record a video about it.
You can watch the video here.