
Research Focus

I am a Research Engineer bridging the gap between theoretical AI safety and scalable infrastructure. My work focuses on Knowledge Provenance: tracing the specific training data responsible for model behaviors observed at inference time.

I build "Glass Box" tooling to make Large Language Models auditable by design.

🔬 Current Research & Methodologies

  • Knowledge Provenance Maps: A framework for tracking gradient influence at the document level during training.
  • Mechanistic Interpretability: Developing 3D visualization tools to map "knowledge anatomy" across transformer layers.
  • Model Health Metrics: Author of the Sparsity, Concentration, and Utilization metrics for diagnosing fine-tuning efficacy.
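The core idea behind document-level influence tracking can be illustrated with a TracIn-style sketch: at a given checkpoint, a training document's influence on a query is approximated by the dot product of their loss gradients. The snippet below is a minimal, framework-agnostic illustration using logistic regression in NumPy — it is not the knowledge-provenance-suite API, and all function names here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad(w, x, y):
    """Gradient of binary cross-entropy loss w.r.t. w for one example."""
    return (sigmoid(x @ w) - y) * x

def influence(w, train_doc, train_label, query, query_label):
    """TracIn-style influence score: dot product of the training-document
    gradient with the query gradient at the current checkpoint.
    Positive => a step on this document would reduce the query loss."""
    g_train = logistic_grad(w, train_doc, train_label)
    g_query = logistic_grad(w, query, query_label)
    return float(g_train @ g_query)

# Toy "documents" as feature vectors at one checkpoint w.
w = np.array([0.5, -0.2, 0.1])
doc_a = np.array([1.0, 0.0, 1.0])   # geometrically close to the query
doc_b = np.array([-1.0, 1.0, 0.0])  # dissimilar to the query
query = np.array([1.0, 0.0, 0.9])

inf_a = influence(w, doc_a, 1.0, query, 1.0)
inf_b = influence(w, doc_b, 1.0, query, 1.0)
```

Because doc_a's gradient aligns with the query's, inf_a comes out positive and larger than inf_b; summing such scores across checkpoints during the training loop is what yields a per-document influence map.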

📜 Selected Publications & Pre-prints

🛠️ Open Source Engineering

  • knowledge-provenance-suite: A PyTorch-based library for tracking training data influence and visualizing 3D knowledge maps. (Formerly "Transparent AI Suite").
  • granular-gradient-tracker: Memory-efficient implementation of per-sample gradient tracking for LLMs.
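The memory trick behind per-sample gradient tracking can be sketched in a few lines: for a linear layer, each example's gradient is just its residual times its input, so one broadcasted multiply replaces N separate backward passes. This is a minimal NumPy sketch of the idea (the actual library is PyTorch-based, and these function names are illustrative, not its API).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_grads_loop(w, X, y):
    """Naive baseline: one gradient computation per example."""
    return np.stack([(sigmoid(x @ w) - yi) * x for x, yi in zip(X, y)])

def per_sample_grads_vectorized(w, X, y):
    """Single pass: per-sample gradient of a linear layer is
    residual_i * x_i, computed for all i with one broadcast."""
    residuals = sigmoid(X @ w) - y      # shape (N,)
    return residuals[:, None] * X       # shape (N, D)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
y = rng.integers(0, 2, size=8).astype(float)
w = rng.normal(size=4)

g_loop = per_sample_grads_loop(w, X, y)
g_vec = per_sample_grads_vectorized(w, X, y)
assert np.allclose(g_loop, g_vec)
```

In a real LLM setting the same principle applies per layer (e.g. via PyTorch's `torch.func.vmap` over `torch.func.grad`), avoiding the O(N) extra memory of materializing a full gradient copy per sample.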

📫 Contact

  • Collaboration: I am open to research collaborations on interpretability and model steering.
  • Email: [email protected]
