Right on cue, Meta has shared its latest AI drop with the world, and this time, the company is letting anybody get their hands on a bot that will write, debug, and describe code in a multitude of coding languages.
As reports last week first hinted, Meta’s Code Llama is a full-on code-generating AI that uses natural language to create and describe code in multiple coding languages. Like most of the AI products Meta’s released as of late, the model is open source and is free to use both personally and commercially.
Meta stopped short of suggesting the bot will replace programmers, instead calling it a tool to “make workflows faster” and “lower the barrier to entry for people who are learning to code.” The program can both create and debug code, and it can also comprehend questions about different programming languages and answer them with plain-text explanations.
It supports languages including C++, Java, PHP, TypeScript, Bash, and C#. There is also a specialized version of the model called Code Llama – Python, which is purpose-built for what has become one of the most widely used programming languages.
There’s also the Code Llama – Instruct model, which is better at comprehending natural-language instructions. Meta said those looking to generate code should use the Instruct model, since it’s “fine-tuned to generate helpful and safe answers in natural language.” That emphasis on safety is notable: previous coding bots have had mixed results producing workable code, and researchers have shown that other bots like ChatGPT and Bard can be manipulated into writing malicious code. Meta’s acceptable use policy for its AI prohibits using it to generate malicious code, malware, or computer viruses.
As for the dangers of using the AI to produce harmful content, Meta said it red-teamed the program in an attempt to force it to produce malicious code and found “Code Llama answered with safer responses” compared to ChatGPT running on GPT-3.5 Turbo.
The model is built on Meta’s Llama 2 language model, which the company said it further trained on “code-specific datasets.” According to Meta’s blog post, the model accepts both code and natural-language prompts. Users can also feed the model more of their existing codebase, which should produce more tailored responses.
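To make that concrete, here is a minimal sketch of prompting a Code Llama checkpoint with the Hugging Face transformers library. The article doesn’t specify any tooling, so the checkpoint name and workflow below are assumptions for illustration, not something Meta spells out.

```python
# Minimal sketch: prompting a Code Llama checkpoint via Hugging Face transformers.
# The checkpoint name below is an assumption for illustration; swap in whichever
# Code Llama weights you actually have access to.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-7b-hf"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Mix a natural-language comment with the start of a function and let the
# model complete the code.
prompt = "# Return the nth Fibonacci number\ndef fib(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```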
Code Llama comes in three sizes, with 7 billion, 13 billion, and 34 billion parameter versions available. Parameter count is usually a rough marker of how capable a model is at producing accurate results, but the smaller models can more easily run on a single GPU. Meta also mentioned that the smaller models are faster and may be better for “real-time code completion.”
According to the blog post, the 34-billion-parameter version of Code Llama scored similarly to OpenAI’s GPT-3.5 on benchmarks like HumanEval, which test an LLM’s ability to write working code. It fell far short of GPT-4 on HumanEval, but it outperformed other coding-focused models like PaLM-Coder and StarCoder.
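For context, HumanEval works roughly like this: the model is handed a function signature and docstring, its completion is executed, and unit tests decide whether the attempt passes. The snippet below is a simplified stand-in for that flow, not the actual benchmark harness.

```python
# Simplified illustration of a HumanEval-style check: execute the model's
# completion of a prompted function, then run a test against the result.

problem_prompt = '''def add(a, b):
    """Return the sum of a and b."""
'''

# Pretend this string came back from the model.
model_completion = "    return a + b\n"

# Assemble and execute the candidate solution, then test it.
namespace = {}
exec(problem_prompt + model_completion, namespace)
assert namespace["add"](2, 3) == 5
print("candidate passed")
```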
The model is akin to the Microsoft-owned GitHub Copilot and Amazon’s CodeWhisperer, though Copilot costs money after a 30-day trial and CodeWhisperer is only free for individual use. These kinds of models are reportedly popular among programmers, with Microsoft claiming that 92% of programmers at large companies are using AI to some extent.
It’s not all gravy. Meta is specifically stopping short of saying what’s in Llama 2’s training data, and for good reason: some developers have already sued Microsoft and GitHub, alleging the companies trained Copilot on their code while ignoring the licenses attached to it.