# eudsl-python-extras

The missing pieces (as far as boilerplate reduction goes) of the MLIR python bindings.

* [TL;DR](#tldr)
* [5s Intro](#5s-intro)
* [Install](#install)
* [Examples/Demo](#examplesdemo)

## TL;DR

Full example at [examples/mwe.py](examples/mwe.py) (i.e., go there if you want to copy-paste).

Turn this

```python
K = 10
memref_i64 = T.memref(K, K, T.i64)

@func
@canonicalize(using=scf)
def memfoo(A: memref_i64, B: memref_i64, C: memref_i64):
    one = constant(1)
    two = constant(2)
    if one > two:
        three = constant(3)
    else:
        for i in range(0, K):
            for j in range(0, K):
                C[i, j] = A[i, j] * B[i, j]
```

into this

```mlir
func.func @memfoo(%arg0: memref<10x10xi64>, %arg1: memref<10x10xi64>, %arg2: memref<10x10xi64>) {
  %c1_i32 = arith.constant 1 : i32
  %c2_i32 = arith.constant 2 : i32
  %0 = arith.cmpi ugt, %c1_i32, %c2_i32 : i32
  scf.if %0 {
    %c3_i32 = arith.constant 3 : i32
  } else {
    %c0 = arith.constant 0 : index
    %c10 = arith.constant 10 : index
    %c1 = arith.constant 1 : index
    scf.for %arg3 = %c0 to %c10 step %c1 {
      scf.for %arg4 = %c0 to %c10 step %c1 {
        %1 = memref.load %arg0[%arg3, %arg4] : memref<10x10xi64>
        %2 = memref.load %arg1[%arg3, %arg4] : memref<10x10xi64>
        %3 = arith.muli %1, %2 : i64
        memref.store %3, %arg2[%arg3, %arg4] : memref<10x10xi64>
      }
    }
  }
  return
}
```

then run it like this

```python
module = backend.compile(
    ctx.module,
    kernel_name=memfoo.__name__,
    pipeline=Pipeline().bufferize().lower_to_llvm(),
)

A = np.random.randint(0, 10, (K, K))
B = np.random.randint(0, 10, (K, K))
C = np.zeros((K, K), dtype=int)

backend.load(module).memfoo(A, B, C)
assert np.array_equal(A * B, C)
```
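
(`ctx` and `backend` above come from the full example in [examples/mwe.py](examples/mwe.py).) As a small aside, here is a sketch of inspecting the IR before lowering; it assumes `ctx.module` (the object handed to `backend.compile`) is a plain `ir.Module`:

```python
# Assumption: ctx.module (the object passed to backend.compile above) is a plain
# ir.Module; see examples/mwe.py for how ctx and backend are actually constructed.
print(ctx.module)                     # shows the func.func @memfoo IR listed above
assert ctx.module.operation.verify()  # verify the module before running any passes
```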

## 5s Intro

This is **not a Python compiler**, but just a (hopefully) nice way to emit MLIR using python.

The main features/affordances:

1. `region_op`s (like `@func` above)
   \

   1. These are decorators around ops (bindings for MLIR operations) that have regions (e.g., [in_parallel](https://github.com/llvm/eudsl/blob/fa4807b17a21a4808cc0a4a8a32e2da57f7e3100/projects/eudsl-python-extras/mlir/extras/dialects/scf.py#L134)).
   They turn decorated functions into an instance of such an op by executing them "eagerly", e.g.,
   ```python
   @func
   def foo(x: T.i32):
       return
   ```
   becomes `func.func @foo(%arg0: i32) { }`; if the region-carrying op produces a result, the identifier for the python function (`foo`) becomes the corresponding `ir.Value` of the result (if the op doesn't produce a result then the identifier becomes the corresponding `ir.OpView`).
   \
   \
   This has been upstreamed to [mlir/python/mlir/extras/meta.py](https://github.com/llvm/llvm-project/blob/24038650d9ca5d66b07d3075afdebe81012ab1f2/mlir/python/mlir/extras/meta.py#L12).
   \
2. `@canonicalize` (like `@canonicalize(using=scf)` above)
   \

   1. These are decorators that **rewrite the python AST**. They transform a select few forms (basically only `if`s) into a more "canonical" form that maps more easily onto MLIR. If that scares you, fear not; they are not essential, and any target MLIR can still be produced without them (by using the slightly more verbose `region_op`s).
   \
   \
   See [mlir.extras.ast.canonicalize](https://github.com/llvm/eudsl/blob/f0914c3b3c0e3ca774575aa6a0fba73e1ebb631f/projects/eudsl-python-extras/mlir/extras/ast/canonicalize.py) for details.
   \
3. `mlir.extras.types` (like `T.memref(K, K, T.i64)` above)
   \

   1. These are just convenient wrappers around upstream type constructors. Note that because MLIR types are uniqued to an `ir.Context`, these are all actually functions that return the type (see the first sketch just below this list).
   \
   \
   These have been upstreamed to [mlir/python/mlir/extras/types.py](https://github.com/llvm/llvm-project/blob/52b18b4e82d412a7d755e89591c6ebcc41c257a1/mlir/python/mlir/extras/types.py).
   \
4. `Pipeline()`
   \

   1. This is just a (generated) wrapper around available **upstream** passes; it can be used to build pass pipelines (via `str(Pipeline())`; see the second sketch just below this list). It is mainly convenient with IDEs/editors that will tab-complete the available methods on the `Pipeline` class (which correspond to passes). Note that if your host bindings don't register some upstream passes, then this will generate "illegal" pass pipelines.
   \
   \
   See [utils/generate_pass_pipeline.py](https://github.com/llvm/eudsl/blob/f0914c3b3c0e3ca774575aa6a0fba73e1ebb631f/projects/eudsl-python-extras/utils/generate_pass_pipeline.py) for details on generation and
   [mlir.extras.runtime.passes](https://github.com/llvm/eudsl/blob/4f599951786aedad96e5943993763dc9c5bfb8cd/projects/eudsl-python-extras/mlir/extras/runtime/passes.py) for the passes themselves.
   \
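
For instance, a minimal sketch of the type constructors from item 3, assuming your host bindings are importable under the `mlir` prefix (substitute your own host package prefix):

```python
# Sketch only: assumes the host bindings live under the `mlir` prefix.
from mlir.ir import Context

import mlir.extras.types as T

# Types are uniqued to an ir.Context, so the constructors can only be
# evaluated while a context is active.
with Context():
    i32 = T.i32()                         # each constructor is a function returning the uniqued type
    memref_i64 = T.memref(10, 10, T.i64)  # memref<10x10xi64>, as in the TL;DR above
    print(i32, memref_i64)
```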
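
And a sketch of building a pipeline with the `Pipeline` wrapper from item 4 (the `bufferize`/`lower_to_llvm` helpers are the same ones used in the TL;DR; the module path follows the `mlir.extras.runtime.passes` link above and again assumes the `mlir` host prefix):

```python
# Sketch only: assumes the host bindings live under the `mlir` prefix.
from mlir.extras.runtime.passes import Pipeline

# Chain passes fluently; str(...) yields the textual pass pipeline, which is what
# gets used when the Pipeline is passed to backend.compile as in the TL;DR.
p = Pipeline().bufferize().lower_to_llvm()
print(str(p))
```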
Note also that there are no docs (because ain't no one got time for that), but that shouldn't be a problem because the package is designed such that you can use/reuse only the pieces/parts you want/understand.
But do open an issue if something isn't clear.

## Install

If you want to just get started/play around:

```shell
$ pip install eudsl-python-extras -f https://llvm.github.io/eudsl
```

Alternatively, this [colab notebook](https://drive.google.com/file/d/1NAtf2Yxj_VVnzwn8u_kxtajfVzgbuWhi/view?usp=sharing) (which is the same as [examples/mlir_python_extras.ipynb](examples/mlir_python_extras.ipynb)) has an MWE if you don't want to install anything at all.

In reality, this package is meant to work in concert with "host bindings" (some distribution of the actual MLIR Python bindings).
Practically speaking, that means you need to have *some* package installed that includes the MLIR python bindings.

So, in practice, the install command above should be amended to

```shell
$ EUDSL_PYTHON_EXTRAS_HOST_PACKAGE_PREFIX=<YOUR_HOST_MLIR_PYTHON_PACKAGE_PREFIX> \
  pip install eudsl-python-extras -f https://llvm.github.io/eudsl
```

where `YOUR_HOST_MLIR_PYTHON_PACKAGE_PREFIX` is (as it says) the package prefix for your chosen host bindings.
**When in doubt about this prefix**, it is everything up until `ir` when you import your bindings, e.g., in `import torch_mlir.ir`, `torch_mlir` is the host package prefix for the torch-mlir bindings.
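
For example, a quick (hedged) way to check which prefixes are usable on your machine; `mlir` and `torch_mlir` here are just illustrative candidates (the upstream bindings usually install under `mlir`, torch-mlir under `torch_mlir`, per the example above):

```python
# Try importing `<prefix>.ir` for each candidate prefix you might have installed.
import importlib

for prefix in ("mlir", "torch_mlir"):
    try:
        importlib.import_module(f"{prefix}.ir")
        print(f"{prefix} looks like a usable host package prefix")
    except ImportError:
        print(f"{prefix} bindings are not installed")
```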

## Examples/Demo

Check [examples](examples) and [tests](tests) for a plethora of example code.