Saturday 17 August 2024

Mojo

A new AI-oriented language is on the scene: Mojo, which aims to become a superset of Python. It follows Python semantics but, thanks to modern compilation techniques, can reportedly achieve speedups of 100,000x or more over standard Python.
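To make the "superset" idea concrete: plain Python code like the snippet below is, in principle, meant to run unchanged under Mojo as well, while performance-critical parts can be rewritten with Mojo's typed functions and structs. This is a minimal illustration of my own, not taken from the Mojo documentation:

# Ordinary Python. Under Mojo's "superset of Python" goal, code like this
# should eventually run unchanged with the Mojo compiler too; the big
# speedups come from optionally rewriting hot paths in Mojo's typed style.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(20))  # 6765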

Mojo was created by Chris Lattner, the creator of LLVM, Clang, Swift, Swift for TensorFlow and MLIR.

Currently, Mojo is only supported on Linux and macOS.

The source code is hosted on GitHub, from which you can clone the repository locally. Within the cloned repository, the 'examples/notebooks' directory contains Jupyter notebooks that you can run with Mojo (provided that Mojo and its Jupyter plugin are installed).
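The steps are roughly the following (assuming the repository is the official modularml/mojo one on GitHub, and that Mojo and the Jupyter plugin mentioned above are already installed; the exact commands may differ depending on your setup):

git clone https://github.com/modularml/mojo.git
cd mojo/examples/notebooks
jupyter notebook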

A very brief introduction to the Mojo language is provided in HelloMojo.ipynb. Another notebook, Matmul.ipynb, implements matrix multiplication in both Python and Mojo. The example showcases how optimization in Mojo can lead to a remarkable speedup, reportedly around 455,127x compared to pure Python. It's important to note, however, that writing such optimized Mojo code requires considerable effort and a solid understanding of the language.
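To give an idea of what is being benchmarked, the Python side of the comparison is essentially the textbook triple loop below. This is a minimal sketch of my own using plain nested lists; the actual notebook wraps the data in a small Matrix helper class and uses its own benchmarking utilities, and the Mojo version, as far as I remember it, starts from the same loop nest before progressively adding types, vectorization, parallelization and tiling.

# Naive triple-loop matrix multiplication, the kind of pure-Python code
# that Matmul.ipynb uses as its baseline (sketch with plain nested lists).
def matmul_python(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0.0] * cols for _ in range(rows)]
    for m in range(rows):
        for k in range(inner):
            for n in range(cols):
                C[m][n] += A[m][k] * B[k][n]
    return C

# Tiny usage example:
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul_python(A, B))  # [[19.0, 22.0], [43.0, 50.0]]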

That said, I tried running the code in that notebook on my old HP laptop, which lacks a GPU and has the following specifications:

inxi -Fxxxzr

System:
Kernel: 5.15.0-118-generic x86_64 bits: 64 compiler: gcc v: 11.4.0
Desktop: Xfce 4.18.1 tk: Gtk 3.24.33 info: xfce4-panel wm: xfwm 4.18.0
vt: 7 dm: LightDM 1.30.0 Distro: Linux Mint 21.3 Virginia
base: Ubuntu 22.04 jammy
Machine:
Type: Laptop System: HP product: HP Laptop 15-bs0xx v: Type1ProductConfigId
serial: <superuser required> Chassis: type: 10 serial: <superuser required>
Mobo: HP model: 832B v: 23.37 serial: <superuser required> UEFI: Insyde
v: F.21 date: 07/04/2017
CPU:
Info: dual core model: Intel Core i5-7200U bits: 64 type: MT MCP
smt: enabled arch: Amber/Kaby Lake note: check rev: 9 cache: L1: 128 KiB
L2: 512 KiB L3: 3 MiB
Speed (MHz): avg: 1134 high: 1245 min/max: 400/3100 cores: 1: 1040
2: 1097 3: 1245 4: 1157 bogomips: 21599

Even though the speedup I achieved was well over an order of magnitude lower than the 455,127x reported in the notebook, it was still impressive: 10,032x! It's likely that on a newer machine, possibly equipped with a GPU, I could achieve speedups on the order of 100,000x.
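For what it's worth, the speedup figure is simply the ratio between the execution time of the pure-Python baseline and that of the optimized version. Here is a minimal sketch of how one could measure it; the notebook uses its own benchmarking helpers and reports GFLOP/s, so the names below are only illustrative:

import timeit

# Time a slow baseline and a fast implementation of the same computation,
# then report the speedup as the ratio of the two best-of-N timings.
def measure_speedup(slow_fn, fast_fn, repeat=5):
    t_slow = min(timeit.repeat(slow_fn, number=1, repeat=repeat))
    t_fast = min(timeit.repeat(fast_fn, number=1, repeat=repeat))
    return t_slow / t_fast

# e.g. measure_speedup(lambda: matmul_python(A, B), lambda: fast_matmul(A, B))
# where fast_matmul is a hypothetical optimized implementation.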

For comparison, tools like Cython or Numba are reported to achieve speedups typically in the range of 10-100x.

Moreover, the Mojo code ran without any issues on my old laptop running Linux Mint.

So Mojo is a very interesting language, even in its infancy.

Other AI-oriented languages to explore, as described in James Thomason's insightful VentureBeat post "Mojo Rising: The resurgence of AI-first programming languages", include Bend and JAX.

Note: the text was checked with ChatGPT.