Greetings earth invaders and network raiders! It has been a spicy hot New York minute since we did an update. SSG has been hard at work on some new things going into the new year. At the start of the year I told the SSG team:
“Last year was great. We saw a good bit of progress. This year I want to lean on the groups and networks we are already a part of. Rapid expansion is good but we need to support our 0days. Pun fully intended.” – Alex, sometime within the first week of 2024.
In today's blog I will be focusing on the AI side of things. It has been a while, and there have been a lot of twists and turns in the plot of getting Navi to where it is right now. As of the last major update we had moved over to the Rasa framework. Then we moved to GPT with the intention of building our own model from the ground up using PyTorch. That is all well and good, but there was already a project out there offering the best of a few worlds, and it is what we are actively working with right now.
Ollama…
One of the SSG core team members brought Ollama to my attention sometime in the last two weeks (as of the time of writing), and I have been off to the side playing with it ever since. The benefits of Ollama far outweighed the notion of building our own from 0. One of them is that the framework is built using PyTorch, which means that while we serve Navi using the llama2-uncensored model, it would be easy enough to get our hands dirty building a custom model with that as the backbone. That scraps the need to generate simple linguistic data to train on and lets us focus on building our model for the cybersecurity industry as intended, on top of the already fairly comprehensive set of data it has on the topic. The real benefit is that it is self-hosted, which opens up a plethora of possibilities: it can run locally or over the network, but at the end of the day it is all in house now.
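For anyone curious what talking to that setup looks like, here is a minimal sketch of hitting an Ollama server over its REST API with Python. The endpoint and model name follow Ollama's defaults and the llama2-uncensored model mentioned above; this is an illustration, not a snippet lifted from the Navi codebase.

```python
# Minimal sketch: send one prompt to an Ollama server and print the reply.
# Host/port are Ollama's defaults; adjust for your own deployment.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def ask_navi(prompt: str) -> str:
    """Send a single prompt to the Ollama server and return the full response."""
    payload = {
        "model": "llama2-uncensored",  # model Navi is currently served with
        "prompt": prompt,
        "stream": False,  # ask for one JSON blob instead of a token stream
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_navi("Explain what an 0day is in one paragraph."))
```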
Integrations, lots of integrations
All of this being said, I will admit I went a little nuts-o building out new ways of communicating with Navi, namely a Discord bot, and I went as far as adding it to my phone using Enchanted LLM (Apple only for the time being). At present, if you are on Windows and want to test out Navi, you can do so either on the SSG Discord or over on the CSI Linux Discord. So we gave users an EASY way to talk to Navi that does not require an external installation. I nearly forgot to update the core Navi program, so I set about doing that.
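The Discord side boils down to "take the message, hand it to the model, send the answer back." Here is a rough sketch of that pattern using discord.py; it is not the actual Navi bot code, the token is assumed to live in an environment variable, and the blocking HTTP call is kept simple for the sake of the example.

```python
# Rough sketch of a Discord bot that forwards mentions to an Ollama-backed model.
import os
import discord
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_navi(prompt: str) -> str:
    """Forward a prompt to the Ollama server and return its reply."""
    payload = {"model": "llama2-uncensored", "prompt": prompt, "stream": False}
    r = requests.post(OLLAMA_URL, json=payload, timeout=120)
    r.raise_for_status()
    return r.json()["response"]

intents = discord.Intents.default()
intents.message_content = True  # required to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    if message.author == client.user:
        return  # never answer ourselves
    if client.user in message.mentions:
        prompt = message.clean_content.strip()
        async with message.channel.typing():   # shows the "typing" indicator
            reply = ask_navi(prompt)            # blocking call, fine for a sketch
        await message.channel.send(reply[:2000])  # Discord caps messages at 2000 chars

client.run(os.environ["DISCORD_TOKEN"])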
As of the pre-release currently available on GitHub on the edge branch, Navi is connected to our in-house server to dish up query responses using the above-mentioned model.
People actually started to use it.
There is always a certain level of pride I feel when I hear the fans on the Navi server spin up, indicating that Navi is thinking. In the four days leading up to the time of writing there have been some one thousand queries to the server, asking it a whole slew of things from the mundane all the way to people trying to get RCE through the Discord bot. Let me tell you… the first night it was live, the core team nearly had a heart attack when one of us had the idea to ask Navi to execute commands and send the output back through Discord. It did so in such a realistic way that it had us going for close to half an hour thinking it was actually capable of this.
Thankfully, on further research, this was not the case. Crisis averted. Here are some of the screenshots from that night:
Well that is nifty.
I later loaded Navi into Enchanted LLM on my phone and had a full-scale discussion with it spanning multiple topics: congenital heart defects, rampancy in AI, and lastly the meaning of friendship. That is cool, I know… but what was really cool was Navi's ability to recall bits and bobs from the conversation. Two topics in, I asked it, "What was the name of the first heart defect I asked you about?" Not only did it name it, it reminded me of some of the key details.
Granted, it is not 100% accurate all of the time, and that is to be expected, as in the example below regarding the repair process for Truncus Arteriosus Type 2.
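That recall is not magic, by the way: chat clients like Enchanted keep the running conversation and resend it with every request, which is what lets the model "remember" earlier topics. A hedged sketch of that idea against Ollama's chat endpoint (again an illustration, not Enchanted's or Navi's actual code):

```python
# Sketch of conversation memory: keep the message history client-side and
# resend it with every request. Endpoint and model follow Ollama defaults.
import requests

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"
history = []  # list of {"role": ..., "content": ...} dicts

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    payload = {"model": "llama2-uncensored", "messages": history, "stream": False}
    r = requests.post(OLLAMA_CHAT_URL, json=payload, timeout=120)
    r.raise_for_status()
    reply = r.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Tell me about truncus arteriosus.")
print(chat("What was the name of the first heart defect I asked you about?"))
```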
That is cool… What about the core program?
Fear not, the core program has been updated as well to use the new Ollama setup. It is also working fluidly with Ubuntu 22.04 and the latest version of CSI Linux. I also, as a rather useful joke, included the ability to @Navi from the terminal the way you would on Discord.
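If you are wondering what @Navi from a terminal even looks like, the idea is simply a wrapper that strips the mention and forwards the rest of the line to the same backend. A hypothetical sketch of that behavior (the real handler in the Navi repo may look different):

```python
# Hypothetical sketch of a terminal "@Navi" handler: strip the mention,
# forward the rest of the line to the Ollama backend, print the answer.
import sys
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def main() -> None:
    line = " ".join(sys.argv[1:]).strip()
    if line.lower().startswith("@navi"):
        line = line[len("@navi"):].strip()  # drop the mention, keep the question
    if not line:
        print("Usage: navi @Navi <your question>")
        return
    payload = {"model": "llama2-uncensored", "prompt": line, "stream": False}
    r = requests.post(OLLAMA_URL, json=payload, timeout=120)
    r.raise_for_status()
    print(r.json()["response"])

if __name__ == "__main__":
    main()
```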
Right now it is pinging the server itself for queries; however, I am hoping that by early to mid April we will be able to choose between a local instance of Navi and the server instance, with follow-up updates enabling users to point Navi at a server they themselves are hosting without having to dig through our code to get to it. (Navi Customizer?)
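That local-versus-server switch could be as simple as making the backend URL configurable instead of hard-coded. A rough sketch of the kind of thing planned, where NAVI_SERVER is a hypothetical variable name and not something you will find in the repo today:

```python
# Rough sketch of local-vs-remote selection: read the backend URL from an
# environment variable (NAVI_SERVER is a hypothetical name), falling back to
# a local Ollama instance when it is not set.
import os
import requests

DEFAULT_LOCAL = "http://localhost:11434"
NAVI_SERVER = os.environ.get("NAVI_SERVER", DEFAULT_LOCAL)

def ask(prompt: str) -> str:
    payload = {"model": "llama2-uncensored", "prompt": prompt, "stream": False}
    r = requests.post(f"{NAVI_SERVER}/api/generate", json=payload, timeout=120)
    r.raise_for_status()
    return r.json()["response"]

# Point Navi at your own server without touching the code:
#   export NAVI_SERVER="http://my-homelab:11434"
print(ask("Say hello."))
```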
Wrapping up.
So, 2024 has been shaping up to be an awesome year for SSG, and this is only one side of the dice. As always, if you want to keep up to date with what we have going on, links will be below: