I so want this to be true
Jul. 25th, 2023 02:40 pm
"The First Room-Temperature Ambient-Pressure Superconductor" arxiv.org/abs/2307.12008
The discussion makes one cautiously hopeful: news.ycombinator.com/item?id=36864624
Meanwhile, JuliaCon has started at MIT.
no subject
Date: 2023-07-26 09:13 pm (UTC)
https://leblon.dreamwidth.org/357640.html
comments in https://www.lesswrong.com/posts/a5wNeyt5yvpEP9BBc/the-first-room-temperature-ambient-pressure-superconductor
no subject
Date: 2023-07-27 12:05 am (UTC)
https://pretalx.com/juliacon2023/talk/U9ABUR/
Keynote: Tim Davis
07-26, 09:00–09:45 (US/Eastern), 26-100
Timothy A. Davis, PhD, is a Professor in the Computer Science and Engineering Department at Texas A&M University, and a Fellow of SIAM, ACM, and IEEE. He serves as an associate editor for the ACM Transactions on Mathematical Software. In 2018 he was awarded the Walston Chubb Award for Innovation by Sigma Xi.
Prof. Davis is a world leader in the creation of innovative algorithms and widely used software for solving large sparse matrix problems that arise in a vast range of real-world technological and social applications. He is the author of SuiteSparse.
https://pretalx.com/juliacon2023/talk/HXFG7X/
IsDef.jl: maintainable type inference
07-26, 12:00–12:30 (US/Eastern), 32-141
IsDef.jl provides maintainable type inference in that it
1. Uses code generation where possible to deterministically infer the types of your code
2. Allows you to overload type-inference with your custom implementation
3. If code generation does not apply and no custom type-inference rule is given, it falls back to Core.Compiler.return_type, wrapped in some safety nets
In this talk IsDef.jl is presented, along with typical applications and details about the implementation.
I am super happy to announce the release of IsDef.jl. For years I have wanted to be able to check whether some function is defined for my types. Plain applicable was not enough, because it only inspects the first layer; if that layer is a generic function (e.g. mere syntactic sugar), it gives no meaningful result. That was the motivation.
It turns out you need full-fledged type inference for this; however, Julia's default type inference is not intended for use by packages. Inference is not guaranteed to be stable, may be nondeterministic, and may change in any minor Julia version, which makes it really hard to rely on for maintainable code. The purpose of Julia's default type inference is code optimization, and as such it is an implementation detail of Julia.
Hence, there is a need for another type inference system, or at least a layer around Julia's default inference which makes it maintainable. Welcome to IsDef.
The high-level interface of IsDef is very simple, consisting of two functions, isdef and Out. Their usage will be explained, along with implementation details and design decisions.
The package version is 0.1.x - suggestions, criticism, and ideas for improving the way inference is made maintainable are very welcome, so that IsDef.jl can become the go-to package for maintainable type inference.
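To make the two-function interface concrete, here is a minimal usage sketch; the isdef(f, ArgTypes...) / Out(f, ArgTypes...) call forms are assumptions based on this abstract and should be checked against the IsDef.jl documentation.

```julia
# Hedged sketch of the isdef/Out interface described above; exact signatures
# are assumed from the talk abstract, not verified against the package.
using IsDef

isdef(sin, Float64)                  # true: sin is inferred to be defined for Float64
isdef(sin, String)                   # false: no applicable method all the way down

Out(sin, Float64)                    # Float64, the inferred return type
Out(map, typeof(sin), Vector{Int})   # inferred return type of map(sin, ::Vector{Int})
```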
Stephan Sahm
Stephan Sahm is founder of the Julia consultancy Jolin.io and organizer of the Julia User Group Munich Meetup. In his academic days, he earned a Master's in Applied Stochastics, a Master's and Bachelor's in Cognitive Science, and a Bachelor's in Mathematics/Informatics. For more than five years he has worked as a senior consultant for Data Science and Engineering, now bringing Julia to industry.
Stephan Sahm's top interests are green computing, functional programming, probabilistic programming, real-time analysis, big data, applied machine learning, and industry applications of Julia in general.
Aside from Julia and sustainable computing, he likes chatting about Philosophy of Mind, Ethics, Consciousness, Artificial Intelligence, and other Cognitive Science topics.
This speaker also appears in:
ExprParsers.jl: Object Orientation for Macros
SimpleMatch.jl, NotMacro.jl and ProxyInterfaces.jl
https://pretalx.com/juliacon2023/talk/CSG8NU/
Exploring the State of Machine Learning for Biological Data
07-26, 12:00–12:30 (US/Eastern), 32-124
Exploring the use of Julia in analyzing biological data: a discussion of libraries and packages, the challenges and opportunities of using machine learning on biological data, and examples of past and future applications.
This talk, "Exploring the State of Machine Learning for Biological Data in Julia," will delve into the use of the high-performance programming language, Julia, in analyzing biological data. We will discuss various libraries and packages available in Julia, such as BioJulia and Flux.jl, and the benefits of using Julia for machine learning in the field of biology. Additionally, the challenges and opportunities that arise when using machine learning techniques on biological data, such as dealing with high-dimensional and heterogeneous data, will be addressed. The talk will also include examples of how machine learning has been applied to biological data in the past and what the future holds for this field.
Edmund Miller
PhD Student at UT Dallas in the Functional Genomics Lab
nf-core maintainer
This speaker also appears in:
Unlocking the Power of Genomic Analysis in Julia
Cancelled, but remarkable: https://pretalx.com/juliacon2023/talk/SXKKZE/
Fine-tuning GPT-3 to understand the Julia Ecosystem
07-26, 12:10–12:20 (US/Eastern), 32-123
GPT-3 is a large language model from OpenAI that can do many general-purpose natural language processing (NLP) tasks. However, given that Julia is a smaller ecosystem, GPT-3 often lacks context about Julia and doesn't produce great results. This talk goes over how to fine-tune GPT-3 with OpenAI.jl.
OpenAI.jl provides an interface to the OpenAI API. This talk will cover how to use the API, how the fine-tuning process works (including how to create a training dataset), and some amazing applications/workflows enabled by giving GPT-3 more context of the Julia ecosystem.
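For a flavor of what such a call looks like from Julia, here is a hedged sketch; the create_completion name, its argument order, and the model id are assumptions about OpenAI.jl rather than details from the talk, so check the package README before relying on them.

```julia
# Hedged sketch only: create_completion's exact signature and the model id
# below are assumptions, not taken from the talk.
using OpenAI

api_key = ENV["OPENAI_API_KEY"]

resp = create_completion(
    api_key,
    "davinci-002";   # placeholder model id; a fine-tuned model id would go here
    prompt = "Write a Julia function that reverses a string.",
    max_tokens = 64,
)
println(resp.response)
```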
Logan is the Developer Community Advocate for the Julia programming language and is on the Board of Directors at NumFOCUS.
https://github.com/JuliaML/OpenAI.jl
https://pretalx.com/juliacon2023/talk/FGZBBB/
Interpretable Machine Learning with SymbolicRegression.jl
07-26, 14:30–15:00 (US/Eastern), 32-D463 (Star)
SymbolicRegression.jl is a state-of-the-art symbolic regression library written from scratch in Julia using a custom evolutionary algorithm. The software emphasizes high-performance distributed computing, and can find arbitrary symbolic expressions to optimize a user-defined objective – thus offering a very interpretable type of machine learning. SymbolicRegression.jl and its Python frontend PySR have been used for model discovery in over 30 research papers, from astrophysics to economics.
SymbolicRegression.jl is an open-source library for practical symbolic regression, a type of machine learning that discovers human-interpretable symbolic models. SymbolicRegression.jl was developed to democratize and popularize symbolic regression for the sciences, and is built on a high-performance distributed backend, a flexible search algorithm, and interfaces with several deep learning packages. The hand-rolled internal search algorithm is a mixed evolutionary algorithm, which consists of a unique evolve-simplify-optimize loop, designed for optimization of unknown real-valued constants in newly-discovered empirical expressions. The backend is highly optimized, capable of fusing user-defined operators into SIMD kernels at runtime with LoopVectorization.jl, performing automatic differentiation with Zygote.jl, and distributing populations of expressions to thousands of cores across a cluster using ClusterManagers.jl. In describing this software, I will also share a new benchmark, “EmpiricalBench,” to quantify the applicability of symbolic regression algorithms in science. This benchmark measures recovery of historical empirical equations from original and synthetic datasets.
In this talk, I will describe the nuts and bolts of the search algorithm, its efficient evaluation scheme, DynamicExpressions.jl, and how SymbolicRegression.jl may be used in scientific workflows. I will review existing applications of the software (https://astroautomata.com/PySR/papers/). I will also discuss interfaces with other Julia libraries, including SymbolicUtils.jl, as well as SymbolicRegression.jl's PyJulia-enabled link to the ScikitLearn ecosystem in Python.
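For orientation, a typical search looks roughly like the sketch below; the Options and equation_search names are assumed from the package documentation, and the data is synthetic.

```julia
# Hedged sketch of a SymbolicRegression.jl search on synthetic data;
# Options and equation_search are assumed from the package docs.
using SymbolicRegression

X = randn(Float32, 5, 200)                    # 5 features, 200 samples
y = 2 .* cos.(X[4, :]) .+ X[1, :] .^ 2 .- 2   # expression we hope to rediscover

options = Options(
    binary_operators = [+, -, *, /],
    unary_operators  = [cos, exp],
)

hall_of_fame = equation_search(X, y; options = options, niterations = 40)
```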
Miles Cranmer
Miles Cranmer is an Assistant Professor in Data Intensive Science at the University of Cambridge. He received his PhD in Astrophysics from Princeton University and his BSc in Physics from McGill University.
Miles is interested in the automation of scientific research with machine learning. His current focus is the development of interpretable machine learning methods, especially symbolic regression, as well as applying them to multi-scale dynamical systems.
no subject
Date: 2023-07-27 12:05 am (UTC)
https://pretalx.com/juliacon2023/talk/L9S3HT/
Neuroblox.jl: biomimetic modeling of neural control circuits
07-26, 15:00–15:30 (US/Eastern), 32-D463 (Star)
Neuroblox.jl is a Julia module designed for computational neuroscience and psychiatry applications. Our tools range from control circuit system identification to brain circuit simulations bridging scales from spiking neurons to fMRI-derived circuits, parameter-fitting models to neuroimaging data, interactions between the brain and other physiological systems, experimental optimization, and scientific machine learning.
Neuroblox.jl is based on a library of modular computational building blocks (“blox”) in the form of systems of symbolic dynamic differential equations that can be combined to describe large-scale brain dynamics. Once a model is built, it can be simulated efficiently and fit to electrophysiological and neuroimaging data. Moreover, the circuit behavior of multiple model variants can be investigated to aid in distinguishing between competing hypotheses.
We employ ModelingToolkit.jl to describe the dynamical behavior of blox as symbolic (stochastic/delay) differential equations. Our libraries of modular blox consist of individual neurons (Hodgkin-Huxley, IF, QIF, LIF, etc.), neural mass models (Jansen-Rit, Wilson-Cowan, Larter-Breakspear, Next Generation, microcanonical circuits, etc.) and biomimetically constrained control circuit elements. A GUI designed to be intuitive to neuroscientists allows researchers to build models that automatically generate high-performance systems of numerical ordinary/stochastic differential equations, from which one can run simulations with parameters fit to experimental data. Our benchmarks show that the increase in speed for simulation often exceeds a factor of 100 compared to neural mass model implementations in the Virtual Brain (Python) and similar packages in MATLAB. For parameter fitting of brain circuit dynamical models, we use Turing.jl to perform probabilistic modeling, including Hamiltonian Monte Carlo sampling and Automatic Differentiation Variational Inference.
This talk will demonstrate Neuroblox.jl by building small dynamic brain circuits. We construct circuits by first defining the nodes (“blox”) and then either defining an adjacency matrix, building directed graphs, or sketching the circuit in our GUI to assemble the brain circuit ODE systems. Using symbolic equations through ModelingToolkit.jl allows us to define additional parameters of the system that are varied across the system, which later can be used for parameter fitting of data. We will also demonstrate the simulation of networks of several hundred spiking neurons representing a biomimetic reinforcement learning model. We present visual patterns to these spiking neuron networks and apply a learning algorithm to adjust the weights to classify the different patterns. The reinforcement learning feedback is implemented by frequent callbacks into the ODE system and allows us to monitor the learning rate continuously. In addition, we will show examples of a larger brain circuit generating an emergent systems-level cortico-striatal-thalamo-cortical loop, which selectively responds to learned visual patterns and matches many of the experimental data's dynamic behaviors from non-human primates. Finally, we will show benchmarks comparing Neuroblox.jl to other implementations of brain circuit simulations. We hope to convince the audience that Julia is the ideal language for computational neuroscience applications.
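Neuroblox.jl's own API is not spelled out in this abstract, but the mechanism it builds on (symbolic equations via ModelingToolkit.jl compiled into fast numerical problems) looks roughly like this generic sketch:

```julia
# Generic ModelingToolkit.jl sketch (not Neuroblox.jl itself): equations are
# assembled symbolically and compiled into an efficient numerical problem.
using ModelingToolkit, OrdinaryDiffEq

@variables t x(t)
@parameters τ
D = Differential(t)

@named decay = ODESystem([D(x) ~ -x / τ], t)
sys  = structural_simplify(decay)
prob = ODEProblem(sys, [x => 1.0], (0.0, 10.0), [τ => 2.0])
sol  = solve(prob, Tsit5())
```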
Helmut Strey
https://pretalx.com/juliacon2023/talk/TXKXKP/
An update on the ITensor ecosystem
07-26, 15:30–16:00 (US/Eastern), 32-G449 (Kiva)
ITensor is a library for running and developing tensor network algorithms, a set of algorithms where high order tensors are represented as a network of lower order, and low rank, tensors. I will give an update on the ITensor Julia ecosystem. In particular, I will discuss efforts to support more GPU backends and block sparse operations on GPU, as well as ITensorNetworks.jl, a new library for tensor network algorithms on general graphs.
ITensor is a library for running and developing tensor network algorithms, a set of algorithms where high order tensors are represented as a network of lower order, and low rank, tensors. I will give an update on the ITensor Julia ecosystem. In particular, I will discuss updates to our tensor operation backend library, NDTensors.jl, and efforts to make it more extensible and support dense and block sparse operations on a variety of GPU backends through package extensions. In addition, I will discuss the new ITensorNetworks.jl library, a library built on top of ITensors.jl for tensor network algorithms on general graphs, which has a graph-like interface and graph operations based on Graphs.jl.
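For readers new to the library, basic ITensor usage (independent of the new GPU and NDTensors.jl work discussed in the talk) looks like this small sketch:

```julia
# Basic ITensors.jl usage: tensors contract automatically over shared indices.
using ITensors

i = Index(10, "i")
j = Index(20, "j")
k = Index(30, "k")

A = randomITensor(i, j)
B = randomITensor(j, k)

C = A * B        # contraction over the shared index j
@show inds(C)    # remaining indices: (i, k)
```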
Matthew Fishman
Research Scientist and Software Engineer at the Flatiron Institute.
https://pretalx.com/juliacon2023/talk/HUCMKV/
Thoughts for the Next Generation
07-26, 16:00–16:30 (US/Eastern), 32-D463 (Star)
The first version of Modelica was released in 1997. Today, over 25 years later, it is still going strong. At the same time, Julia is taking the numerical computing world by storm and ModelingToolkit is revisiting many of the topics and applications that drove the development of Modelica. This talk will highlight many of the important aspects of Modelica's design in the hope that these are taken to heart by the developers of the next generation of modeling and simulation tools.
Modelica is an interesting mix of basically three different topics. At the lowest level, Modelica is concerned with simulation (solving non-linear equations, integrating differential equations, computing Jacobians). In this way, it is very much aligned with Julia and my sense is that Julia has already exceeded what Modelica tools can offer at this level.
The next level above that is the ability to perform symbolic manipulation on the underlying equations so as to generate very efficient simulation code. The capabilities in this layer are what pushed Modelica forward into applications like real-time hardware-in-the-loop because symbolic manipulation provides enormous benefits here that other tools, lacking symbolic manipulation, could not match. There were some amazing contributions from Pantelides, Elmqvist, Otter, Cellier, and many others in this area and it is important that people understand these advances.
Finally, you have the language itself. While the symbolic manipulation opened many doors to extend what was possible with modeling and simulation, the language that was layered on top addressed very important ideas around usability. It brought in many important ideas from software engineering to make modeling and simulation a scalable enterprise (allowing model developers to create models with potentially thousands of components and tens or even hundreds of thousands of equations). But it also brought usability with a representation that quite naturally and seamlessly allowed graphical, schematic-based modeling to be implemented consistently by a number of different tools.
None of this is to say that Modelica is perfect or that better tools aren't possible. It is simply to raise awareness of all the things Modelica got right so that future generations can build on that and create even more amazing and capable tools to push the frontiers of modeling and simulation even further.
Michael Tiller
Michael Tiller has a Ph.D. in Mechanical Engineering from the University of Illinois, Urbana-Champaign. He is the Secretary of the Modelica Association, President of the North America Modelica Users' Group, author of two books on Modelica and the CTO of Realis Simulation.
https://pretalx.com/juliacon2023/talk/PX99DC/
Tensor network contraction order optimization algorithms
07-26, 16:15–16:30 (US/Eastern), 32-G449 (Kiva)
In this talk, I will introduce the algorithms used to find optimal contraction orders for tensor networks, which are implemented in the OMEinsumContractionOrders.jl package. These algorithms have a wide range of applications, including simulating quantum circuits, solving inference problems, and solving combinatorial optimization problems.
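As a hedged sketch of what contraction-order optimization looks like in practice (the optimize_code, TreeSA, uniformsize, and contraction_complexity names are assumed from the OMEinsum/OMEinsumContractionOrders documentation):

```julia
# Hedged sketch: optimize the contraction order of a small einsum network.
using OMEinsum

code  = ein"ij,jk,kl,li->"          # a ring of four matrices, fully contracted
sizes = uniformsize(code, 16)       # assume every index has dimension 16

opt = optimize_code(code, sizes, TreeSA())   # simulated-annealing order search
contraction_complexity(opt, sizes)           # time/space complexity of the chosen order
```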
JinGuo Liu
https://pretalx.com/juliacon2023/talk/CQEE8M/
Expronicon: a modern toolkit for meta-programming in Julia
07-26, 16:10–16:20 (US/Eastern), 32-141
Expronicon is a toolkit for metaprogramming in Julia, offering a rich set of functions for analyzing, transforming, and generating Julia expressions, first-class support for MLStyle's pattern matching, and type-stable algebraic data types for efficient and simple code generation. Perfect for boosting productivity and improving coding efficiency.
Expronicon is a collective toolkit built on top of the awesome MLStyle package for functional programming features. It features
a rich set of tools for analyzing, transforming, and generating Julia expressions
type-stable algebraic data types
expansion of compile-time dependencies, helping you get rid of dependencies that are not needed at runtime and improving your package's loading time
ExproniconLite: a lightweight version with zero dependencies for latency-sensitive applications
Please refer to the documentation website for more information: https://expronicon.rogerluo.dev/
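A tiny hedged example of the analyze-and-generate workflow; JLFunction and codegen_ast are assumed from the Expronicon documentation, so verify against it before use.

```julia
# Hedged sketch: parse a function definition into a typed object, inspect it,
# and generate the Expr back; names assumed from the Expronicon docs.
using Expronicon

ex = :(function double(x)
    2x
end)

fn = JLFunction(ex)    # typed view: fn.name, fn.args, fn.body, ...
println(fn.name)       # => double
codegen_ast(fn)        # turn the typed object back into an Expr
```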
Xiu-zhe (Roger) Luo
Graduate student at University of Waterloo and Perimeter Institute. I'm interested in exploring quantum many-body physics with machine learning and modern methods of programming. I'm also the Wittek Quantum Prize winner in 2020.
One of the creators of QuantumBFS/Yao.jl and many other open source packages in JuliaLang. Contributor of various projects including FluxML/Zygote.jl, FluxML/Flux.jl, PyTorch.
Core member of JuliaCN, the Julia language localization organization for Chinese.
This speaker also appears in:
Yao.jl & Bloqade.jl: towards a symbolic engine for quantum
https://pretalx.com/juliacon2023/talk/RRBDAA/
StochasticAD.jl: Differentiating discrete randomness
07-26, 16:30–17:00 (US/Eastern), 32-D463 (Star)
Automatic differentiation (AD) is great: use gradients to optimize, sample faster, or just for fun! But what about coin flips? Agent-based models? Nope, these aren’t differentiable... or are they? StochasticAD.jl is an open-source research package for AD of stochastic programs, implementing AD algorithms for handling programs that can contain discrete randomness.
StochasticAD.jl is an open-source research package for automatic differentiation (AD) of stochastic programs. The particular focus is on implementing AD algorithms for handling programs that can contain discrete randomness. But what does this even mean?
Derivatives are all about how functions are affected by a tiny change ε in their input. For example, take the function sin(x). Perturb x by ε, and the output changes by approximately cos(x) * ε: tiny change in, tiny change out. And the coefficient cos(x)? That's the derivative!
But what happens if your function is discrete and random? For example, take a Bernoulli variable, with probability p of being 1 and probability 1-p of being 0. If we perturb p by ε, the output of the Bernoulli variable cannot change by a tiny amount. But in the probabilistic world, there is another way to change by a tiny amount on average: jump by a large amount, with tiny probability.
StochasticAD.jl generalizes the well-known concept of dual numbers by including a third component to describe large perturbations with infinitesimal probability. The resulting object is called a stochastic triple, and StochasticAD.jl develops the algorithms to propagate this triple through user-written code involving discrete randomness. Ultimately, the result is a provably unbiased estimate of the derivative of your program, even if it contains discrete randomness!
In this talk, we will discuss the workings of StochasticAD.jl, including the underlying theory and the technical implementation challenges.
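A tiny hedged sketch of what this looks like in code; stochastic_triple and derivative_estimate are the entry points named in the package README, so treat the exact signatures as assumptions.

```julia
# Hedged sketch: unbiased derivative estimates through discrete randomness.
using StochasticAD, Distributions, Statistics

f(p) = rand(Bernoulli(p))        # discrete and random: not classically differentiable

stochastic_triple(f, 0.3)        # value + infinitesimal + finite-jump-with-tiny-probability parts
est = mean(derivative_estimate(f, 0.3) for _ in 1:10_000)
# est ≈ 1.0, since d/dp E[f(p)] = d/dp p = 1
```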
Frank Schäfer
I am a postdoc in the Julia Lab at the Massachusetts Institute of Technology (MIT).
Previously: PhD in physics in the Bruder group within the “Quantum Computing and Quantum Technology” PhD school at the University of Basel.
https://frankschae.github.io/
This speaker also appears in:
Differentiation of discontinuities in ODEs arising from dosing
Convex Optimization for Quantum Control in Julia
no subject
Date: 2023-07-27 12:52 am (UTC)
https://pretalx.com/juliacon2023/talk/HPT87A/ (not live-streamed/recorded)
The Special Math of Translating Theory To Software in DiffEq
07-27, 11:00–11:30 (US/Eastern), 32-141
Chris Rackauckas: The Special Math of Translating Theory To Software in Differential Equations
The Special Math of Translating Theory To Software in Differential Equations by Chris Rackauckas in 32-141
https://pretalx.com/juliacon2023/talk/3MMRYJ/ (not live-streamed/recorded)
Steven Smith: Say It With Matrices?
07-27, 12:00–12:30 (US/Eastern), 32-141
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
My career and contributions have been greatly influenced by Alan Edelman’s work on random matrices, optimization, scientific computing, along with his cherished collaboration and advice. This talk starts with a brief survey of how Alan and his ideas provide a strong foundation for applied research in important areas: random matrices and optimization are applied extensively in diverse fields from sensor arrays to social media networks. The recent, interwoven developments of networked multimedia content sharing and neural-network-based large language and diffusion models would appear to provide a natural home for this theory, which has a great deal to say about the underlying matrices and algorithms that describe both the data and nonlinear optimization methods used in AI. Yet progress in these AI fields has evolved rapidly and spectacularly almost wholly without explicit insights from matrix theory, in spite of their deep reliance on random matrices. The second part of the talk uses related experience from recent work on MCMC- and LLM-based causal inference of real-world network influence to describe the challenges and potential opportunities of applying matrix theory to these recent developments.
https://pretalx.com/juliacon2023/talk/NLJFAX/
Surrogatizing Dynamic Systems using JuliaSim: An introduction.
07-27, 14:00–14:30 (US/Eastern), 32-D463 (Star)
In this talk, we will discuss the use of surrogates in scientific simulations, and introduce JuliaSim, a commercial offering built on top of the SciML ecosystem, and introduce some of the surrogates available in JuliaSim.
In recent years, the use of surrogates in scientific simulations has become increasingly of interest. Surrogates, also known as digital twins, are approximate models that are trained to mimic the output of a computationally expensive or complex simulation. They can be used to quickly explore the parameter space of a simulation, tune a controller, or optimize inputs and parameters.
The SciML ecosystem is an open-source project that aims to provide a suite of software tools for scientific modeling in the Julia programming language. It includes a wide range of modeling and simulation tools, including differential equations solvers, optimization algorithms, and surrogate models. The goal of SciML is to make it easy for scientists and engineers to use advanced modeling techniques in their work.
JuliaSim is a commercial offering built on top of the open-source SciML ecosystem. It provides a suite of tools for building and deploying surrogate models in Julia. JuliaSim makes it easy to interface with existing simulation codes and dynamic models and also to train, validate, and deploy surrogates using a wide range of algorithms.
In this talk, we will discuss the use of surrogates in scientific simulations, introduce JuliaSim, and survey the variety of surrogates available in JuliaSim, including their individual specialties.
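JuliaSim's own API is not shown in this abstract; as a flavor of the open-source side of the ecosystem, here is a hedged Surrogates.jl sketch (constructor and sampler names assumed from the SciML documentation) fitting a cheap surrogate to a one-dimensional stand-in model.

```julia
# Hedged sketch with the open-source Surrogates.jl, not JuliaSim's own API:
# fit a radial-basis surrogate to an "expensive" 1-D model.
using Surrogates

expensive_model(x) = sin(x) + x^2 / 10
lb, ub = 0.0, 2π

xs = sample(30, lb, ub, SobolSample())   # design points
ys = expensive_model.(xs)

rbf = RadialBasis(xs, ys, lb, ub)        # the surrogate
rbf(1.5)                                 # evaluate the cheap approximation
```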
Sharan Yalburgi
Sharan is a Research Engineer at JuliaHub working on JuliaSim - a modern SciML powered suite for modeling and simulation.
https://pretalx.com/juliacon2023/talk/MS7SVG/
SciML: Novel Scientific Discoveries through composability
07-27, 14:30–15:00 (US/Eastern), 32-D463 (Star)
SciML provides tools for a wide problem space. It can be confusing for new users to decide between the packages and the kind of questions that can be answered using each of them. This talk will walk through various ecosystem components for tasks such as inverse problems, model augmentation, and equation discovery and showcase workflows for using these packages with examples based on real-world data.
SciML provides tooling for various Scientific Machine Learning tasks, including parameter estimation, model augmentation, equation discovery, ML-based solvers for differential equations, and surrogatization. It can be confusing for new users to reason about the various packages, including DiffEqParamEstim, DiffEqFlux, DataDrivenDiffEq, NeuralPDE, Surrogates, etc., and their suitability for the problem they want to solve. We plan to provide a wide overview of the SciML ecosystem packages, describing the kinds of questions that each of these packages is suitable to answer. Additionally, we will demonstrate sample SciML workflows that show the composability of the ecosystem.
Vaibhav Dixit
Vaibhav is a Software Engineer at JuliaHub where he works on the Pumas Engineering team. He is an active member of the SciML ecosystem with contributions across parameter estimation and global sensitivity.
Utkarsh
Graduate student @ MIT
Torkel
Torkel is a postdoc at the Julia Lab at MIT. His research is on methods for modelling (bio)chemical reaction networks, focusing especially on how these are affected by noise. He is a developer of the Catalyst.jl package.
https://pretalx.com/juliacon2023/talk/KPXNR7/
Geometric Algebra at compile-time with SymbolicGA.jl
07-27, 15:00–15:30 (US/Eastern), 32-144
Geometric Algebra is a high-level mathematical framework which expresses a variety of geometric computations with an intuitive language. While its rich structure unlocks deeper insight and an elegant simplicity, it often comes at a cost to numerical implementations. After giving an overview of geometric algebra and its applications, a Julia implementation is presented which uses metaprogramming to shift the work to compile-time, enabling a fast and expressive approach to computational geometry.
Geometric Algebra is a high-level mathematical framework which expresses a large range of geometric computations in a simple and intuitive language. From a single set of rules and axioms, this framework allows you to create diverse and geometrically meaningful spaces which best suit your needs.
Complex numbers and quaternions may be identified as elements in such spaces which describe rotations in two and three dimensions. These spaces may express Euclidean transformations, such as reflections, rotations and translations; others express intersections of flat geometry such as lines and planes, and may include rounded geometry such as circles and spheres in slightly more complex spaces - all in a dimension-agnostic manner.
The price to pay for this unifying, high-level framework is extra mathematical structure that is generally not a zero-cost abstraction. However, by shifting the application of this structure to compile-time, it is possible to combine the expressive power of geometric algebra with highly performant code.
In this talk, pragmatic motivations for considering geometric algebra are provided, with a quick introduction to its formalism. Then, the open-source SymbolicGA.jl package is presented as a compile-time implementation of geometric algebra. It will be shown that the language of geometric algebra can be used to describe many geometric operations, all with a low symbolic complexity and in a performant manner.
Cédric Belmant
Cédric Belmant is an applied mathematician and programmer, with a strong interest in 3D graphics, geometry processing and application development. He believes the expressive power of the Julia programming language is key to building applications and tools with minimal complexity, and has been exploring ways to integrate computer graphics in the Julia ecosystem.
This speaker also appears in:
When type instability matters
Towards developing a production app with Julia
https://pretalx.com/juliacon2023/talk/PK9C77/
SimpleGA: a lightweight Geometric Algebra library
07-27, 15:30–16:00 (US/Eastern), 32-144
Geometric algebra (GA) is a powerful language for formulating and solving problems in geometry, physics, engineering and graphics. SimpleGA is designed as a straightforward implementation of the most useful geometric algebras, with the key focus on performance. In this talk we use the library to explain some key properties of GA, and explain the motivation behind the design and how it utilises Julia's unique features.
Geometric algebra is a powerful mathematical language that unites many disparate concepts including complex numbers, quaternions, exterior algebra, spinors and projective geometry. The goal with this talk is to use a simple implementation of the algebra to explain the main features. No prior knowledge of geometric (aka Clifford) algebra will be assumed, and by the end the audience should have a basic understanding of the properties of the geometric product - the key basis for the algebra. A novel implementation of this product in terms of binary operations will also be discussed. All of the SimpleGA source code is available, and there are many excellent free resources on geometric algebra for those interested in diving deeper.
Chris Doran
Chris Doran spent the early part of his career researching applications of geometric algebra in quantum theory and gravitation, before switching to graphics. In 2005 he founded Geomerics, which provided real-time global illumination technology to the games industry. Geomerics' Enlighten was used in hundreds of games including Dragon Age, Overwatch and Final Fantasy. Chris spent 4 years as a Director of Research at Arm, and now focuses on helping start-ups and university spin-outs. He is currently a Director of Monumo, who are actively using Julia in their research pipeline.
https://pretalx.com/juliacon2023/talk/PYJVRU/
Sound Synthesis with Julia
07-27, 15:30–15:40 (US/Eastern), 32-123
We describe and demonstrate a method to use Julia to generate music on a computer. While electronic music generation has had a long and distinguished history, the use of the Julia programming language provides benefits that are not available using traditional tools in this area.
Most electronic music synthesis software today is written in C/C++. This is usually due to the performance requirements that are necessary in this domain. The use of Julia however brings two distinct advantages to this area.
First, using a high-level, dynamic programming language allows for a wider and more productive range of experimentation. The use of Julia allows the performance requirements to be met while working in an easy-to-use language. Second, the wide range of high-quality mathematical libraries in Julia, from FFTs to differential equation solvers, allows for the use of high-level constructs, further increasing the productivity of the artist.
In this talk, we show a set of fundamental building blocks for music synthesis in Julia. From wave generators to filters to amplifiers, we will see how these can be built with simple Julia functions, leveraging the existing ecosystem. We will show that Julia's ability to build abstractions without sacrificing performance is crucial to this use case.
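As a flavor of how small these building blocks can be in plain Julia, here is our own sketch (not the speakers' code): a sine oscillator and a gain stage, written to disk with WAV.jl, which we assume for file output.

```julia
# Our own minimal sketch, not the talk's code: oscillator and gain as plain
# Julia functions, with WAV.jl assumed for writing the result to disk.
using WAV

const SR = 44_100                            # sample rate in Hz

oscillator(freq, dur) = sin.(2π .* freq .* (0:1/SR:dur))
gain(signal, g) = g .* signal

note = gain(oscillator(440.0, 2.0), 0.5)     # two seconds of A4 at half amplitude
wavwrite(note, "a4.wav", Fs = SR)
```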
Ahan Sengupta
I am a high school student interested in the intersection of programming and music.
Avik Sengupta
Avik Sengupta is the head of product development and software engineering at Julia Computing, contributor to open source Julia and maintainer of several Julia packages. Avik is the author of Julia High Performance, co-founder of two artificial intelligence start-ups in the financial services sector and creator of large complex trading systems for the world's leading investment banks.
https://pretalx.com/juliacon2023/talk/VJRZDF/
Keynote: Stephen Wolfram
07-27, 16:15–17:00 (US/Eastern), 26-100
Dr. Stephen Wolfram is the creator of Mathematica, Wolfram|Alpha and the Wolfram Language; the author of A New Kind of Science; the originator of the Wolfram Physics Project; and the founder and CEO of Wolfram Research.
Over the course of more than four decades, he has been a pioneer in the development and application of computational thinking—and has been responsible for many discoveries, inventions and innovations in science, technology and business. Based on both his practical and theoretical thinking, Dr. Wolfram has emerged as an authority on the implications of computation and artificial intelligence for society and the future, and the importance of computational language as a bridge between the capabilities of computation and human objectives.
Dr. Wolfram has been president and CEO of Wolfram Research since its founding in 1987. In addition to his corporate leadership, Wolfram is deeply involved in the development of the company's technology, personally overseeing the functional design of the company's core products on a daily basis, and constantly introducing new ideas and directions.
no subject
Date: 2023-07-27 06:20 pm (UTC)
https://pretalx.com/juliacon2023/talk/RTCDVR/
State of Julia
07-28, 09:00–09:45 (US/Eastern), 26-100
https://pretalx.com/juliacon2023/talk/QN3XGU/
Learning smoothly: machine learning with RobustNeuralNetworks.jl
07-28, 10:00–10:30 (US/Eastern), 26-100
Neural networks are typically sensitive to small input perturbations, leading to unexpected or brittle behaviour. We present RobustNeuralNetworks.jl: a Julia package for neural network models that are constructed to naturally satisfy robustness constraints. We discuss the theory behind our model parameterisation, give an overview of the package, and demonstrate its use in image classification, reinforcement learning, and nonlinear robotic control.
Modern machine learning relies heavily on rapidly training and evaluating neural networks in problems ranging from image classification to robotic control. However, most existing neural network architectures have no robustness certificates, making them sensitive to even small input perturbations and highly susceptible to poor data quality, adversarial attacks, and other forms of input disturbances. The few neural network architectures proposed in recent years that offer solutions to this brittle behaviour rely on explicitly enforcing constraints during training to “smooth” the network response. These methods are computationally expensive, making them slow and difficult to scale up to complex real-world problems.
Recently, we proposed the Recurrent Equilibrium Network (REN) architecture as a computationally efficient solution to these problems. The REN architecture is flexible in that it includes all commonly used neural network models, such as fully-connected networks, convolutional neural networks, and recurrent neural networks. The weight matrices and bias vectors in a REN are directly parameterised to naturally satisfy behavioural constraints chosen by the user. For example, the user can build a REN with a given Lipschitz constant to ensure the output of the network is quantifiably less sensitive to unexpected input perturbations. Other common options include contracting RENs and input/output passive RENs.
The direct parameterisation of RENs means that no additional constrained optimization methods are needed to train the networks to be less sensitive to attacks or perturbations. We can therefore train RENs with standard, unconstrained optimization methods (such as gradient descent) while also guaranteeing their robustness. Achieving the “best of both worlds” in this way is unique to our REN model class, and allows us to freely train RENs for common machine learning problems as well as more difficult applications where safety and robustness are critical.
In this talk, we will present our RobustNeuralNetworks.jl package. The package is built around the AbstractREN type, encoding the REN model class. It relies heavily on key features of the Julia language (such as multiple dispatch) for a neat, efficient implementation of RENs, and can be used alongside Flux.jl to solve machine learning problems with and without robustness requirements, all in native Julia.
We will give a brief introduction to the fundamental theory behind our direct parameterisation of neural networks, and outline what we mean by nonlinear robustness. We will follow this with a detailed overview of the RobustNeuralNetworks.jl package structure, including the key types and methods used to construct and implement a REN. To conclude, we will demonstrate some interesting applications of our Julia package for REN in our own research, including in:
Image classification
System identification
Learning-based control for dynamical systems
Real-time control of robotic systems via the Julia C API
Ultimately, we hope to show how RENs will be useful to the wider Julia machine learning community in both research and industry applications. For more information on the REN model class and its uses, please see our two recent papers https://arxiv.org/abs/2104.05942 and https://doi.org/10.1109/LCSYS.2022.3184847.
Nicholas Barbara
Nicholas Barbara is a PhD candidate at the Australian Centre for Robotics, within the University of Sydney. He is interested in robust machine learning, control theory, spacecraft GNC, and all things Julia.
https://pretalx.com/juliacon2023/talk/FVZXUF/ (not livestreamed/not clear if recording will be published/not sure what to do about this; I am very interested in sparseness, but it's inconvenient timing)
Sparsity: Practice-to-Theory-to-Practice
07-28, 11:00–11:25 (US/Eastern), 32-141
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
As we all know, the entire world of computation is mostly matrix multiplies. Within this universe we do allow some variation. Specifically, all the world is mostly either dense matrix multiplies or sparse matrix multiplies. Sparse matrices are often used as a trick to solve larger problems by only storing non-zero values. As a result, there is a large toolkit of powerful sparse matrix software. The availability of sparse matrix tools inspires representing a wide range of problems as sparse matrices. Notably, graphs have many wonderful sparse matrix properties, and many graph algorithms can be written as matrix multiplies using a variety of semirings. This inspires developing new sparse matrix software that encompasses a wide range of semiring operations. In the context of graphs, where vertex labels are diverse, it is natural to relax strict dimension constraints and make hyper-sparse matrices a full-fledged member of the sparse matrix software world. The wide availability of hyper-sparse matrices allows addressing a wide range of problems and completely new approaches to parallel computing.
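A tiny pure-Julia illustration of the "graph algorithms as matrix multiplies" idea (our own sketch with SparseArrays, not the speaker's GraphBLAS code): one breadth-first-search frontier expansion is a sparse matrix-vector product.

```julia
# Our own sketch, not GraphBLAS: one BFS frontier expansion expressed as a
# sparse matrix-vector product over the standard (+, *) semiring.
using SparseArrays

# adjacency matrix with A[i, j] = 1 for edges 1→2, 2→3, 3→1, 3→4
A = sparse([1, 2, 3, 3], [2, 3, 1, 4], ones(Int, 4), 4, 4)

frontier = sparsevec([1], [1], 4)        # start a BFS at vertex 1
next = transpose(A) * frontier           # vertices reachable in one step
@show findall(!iszero, Vector(next))     # => [2]
```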
https://pretalx.com/juliacon2023/talk/9ES8NF/
Falra.jl: Distributed Computing with AI Source Code Generation
07-28, 12:20–12:30 (US/Eastern), 32-124
Falra.jl in Julia provides a straightforward approach to implementing distributed computing, equipped with an AI-assisted feature for generating source code. This addition facilitates more efficient big data transformations. Tasks such as preprocessing 16TB of IoT data can be done in 1/100 of the original time. Developers are now able to generate Julia source code more easily with the aid of AI, further aiding in distributed computing tasks.
This is a real development scenario we encountered: preprocessing a six-year, 16 TB historical raw IoT dataset for data cleaning and transformation. It would take 100 days to complete in a single-machine environment, which is time-consuming.
So Falra.jl was developed to let us divide the data cleaning and transformation work into smaller tasks. Falra.jl then automatically distributes these tasks for distributed processing. This architecture saves a lot of computing time and development cost. Through Falra.jl, we were able to complete all IoT data transformations in 1/100 of the time.
Compared to the native Julia Distributed module, the advantage of Falra.jl is that developers do not need to learn Julia's distributed programming syntax; they can use their single-machine programs as they always have. In addition, Falra.jl can be deployed on any network that can be reached via HTTPS, so there is no need to deal with TCP or other network or firewall issues.
Moreover, we've enhanced our approach by integrating AI-assisted auto-generation of Julia source code. This novel feature allows developers to efficiently create Julia code using artificial intelligence. Rather than manually crafting each line of code, the AI can generate source code based on the developer's requirements, accelerating the development process. This makes it feasible for developers, even those unfamiliar with Julia, to quickly produce distributed programs. This AI-driven tool not only simplifies code creation but also enables rapid adaptation and extension of applications built on Falra.jl. The fusion of distributed computing and AI-assisted generation of Julia source code significantly boosts productivity.
Currently, we have released Falra.jl on GitHub (https://github.com/bohachu/Falra.jl) for everyone to use.
Bowen Chiu
With 33 years of experience in software programming, Bowen is the founder of CAMEO Corporation. He specializes in artificial intelligence and distributed computing, with a particular focus on the environmental sector, the educational sector, and start-ups.
no subject
Date: 2023-07-27 06:36 pm (UTC)
https://pretalx.com/juliacon2023/talk/WRHJPD/
ExprParsers.jl: Object Orientation for Macros
07-28, 14:00–14:30 (US/Eastern), 26-100
You want to build a complex macro? ExprParsers.jl gives you many prebuilt expression parsers - for functions, calls, args, wheres, macros, ... - so that you don't need to care about the different ways these high-level Expr-types can be represented in Julia syntax. Everything is well typed, so that you can use familiar Julia multiple dispatch to extract the needed information from your input Expr.
The need to abstract over Expr-types like functions is already recognized by the widely used MacroTools.jl, which supports functions (and arguments) through helpers like splitdef and combinedef that go from Expr to Dict and back.
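For contrast, the MacroTools workflow mentioned above looks roughly like this: splitdef and combinedef round-trip a function definition through a plain Dict.

```julia
# The MacroTools.jl splitdef/combinedef round trip referenced above.
using MacroTools

ex = :(function greet(name::String; excited::Bool = false)
    excited ? string("Hello, ", name, "!") : string("Hello, ", name, ".")
end)

parts = splitdef(ex)       # Dict with :name, :args, :kwargs, :body, ...
parts[:name] = :greet2     # tweak the parsed representation
combinedef(parts)          # and turn it back into an Expr
```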
ExprParsers.jl is different from MacroTools.jl in that it focuses 100% on this kind of object orientation, extended to many more Expr-types like where syntax, type annotations, keyword args, etc. In addition, ExprParsers are well typed, composable, and extendable, in that you can easily write your own parser object.
When working with ExprParsers, you first construct your (possibly nested) parser, describing in detail what you expect as the input. Then you safely parse given expressions and dispatch on the precise ExprParser types. Finally, you can mutate the parsed results and return the manipulated version, or simply extract information from it.
Stephan Sahm
Stephan Sahm is founder of the Julia consultancy Jolin.io and organizer of the Julia User Group Munich Meetup. In his academic days, he earned a Master's in Applied Stochastics, a Master's and Bachelor's in Cognitive Science, and a Bachelor's in Mathematics/Informatics. For more than five years he has worked as a senior consultant for Data Science and Engineering, now bringing Julia to industry.
Stephan Sahm's top interests are green computing, functional programming, probabilistic programming, real-time analysis, big data, applied machine learning, and industry applications of Julia in general.
Aside from Julia and sustainable computing, he likes chatting about Philosophy of Mind, Ethics, Consciousness, Artificial Intelligence, and other Cognitive Science topics.
This speaker also appears in:
IsDef.jl: maintainable type inference
SimpleMatch.jl, NotMacro.jl and ProxyInterfaces.jl
https://pretalx.com/juliacon2023/talk/BFQVMX/
REPL Without a Pause: Bringing VimBindings.jl to the Julia REPL
07-28, 14:30–15:00 (US/Eastern), 26-100
VimBindings.jl is a Julia package that emulates vim, the popular text editor, directly in the Julia REPL. This talk will illuminate the context in which a REPL-hacking package runs by taking a deep dive into the Julia REPL code, and articulate the modifications VimBindings.jl makes to introduce novel functionality. The talk will also describe design problems that emerge at the intersection of the REPL and vim paradigms, and the choices made to attempt a coherent fusion of the two.
Vim is a ubiquitous text editor found on almost every modern operating system. Vim (and its predecessor vi) has a storied history as a primary contender in the “editor wars”, its modal editing paradigm often pitted against the modeless, extensibility-oriented Emacs.
Vim users often tout its speed and ease of use, at least after stomaching a steep learning curve. Once a user has learned vim they might question why their fingers should leave home-row, even when they aren’t using vim. Their muscle memory can be applied across many applications by using vim emulation plugins or packages: browsers (vimium and vim vixen), email clients (mutt), IDE plugins (vscode-neovim for vs-code, ideavim for IntelliJ), and shell modes (zsh, bash, fish). Vim emulation can even be used to interact with an operating system: sway for Linux users, AppGrid for MacOS users, or evil mode for Emacs users.
Finally, users can use vim emulation in the Julia REPL. In this talk I will describe how VimBindings.jl works, as well as the design considerations borrowed from other vim emulation implementations in its development. I will take a deep dive into the Julia REPL code and describe how the package introduces new functionality to the REPL. I will also discuss the unique challenges faced during the creation of VimBindings.jl, and the not-so-elegant solutions developed to solve them.
Github repo: https://github.com/caleb-allen/VimBindings.jl
Caleb Allen
Caleb Allen is a software engineer and the author of VimBindings.jl, a package that brings the power and elegance of Vim to the Julia REPL. He has worked in various startups, developing applications and systems in languages such as Java, Kotlin, and Python, among others. He has a passion for building tools and infrastructure that make software development more enjoyable and productive. He also enjoys learning new programming languages as a hobby, and he discovered Julia in 2020 during the pandemic. Since then, he has been fascinated by Julia's features and performance, and has enjoyed learning and contributing to the Julia ecosystem. He is excited to share his experience and insights developing VimBindings.jl with the Julia community at JuliaCon.
3pm slot is particularly tricky
3:30 though is clear:
https://pretalx.com/juliacon2023/talk/M8PLZV/
Machine Learning on Server Side with Julia and WASM
07-28, 15:30–16:00 (US/Eastern), 32-124
Julia is a high-performance programming language that has gained traction in the machine-learning community due to its simplicity and speed. This talk looks at how Julia can be used to build machine learning models on the server using WebAssembly (WASM) and the WebAssembly System Interface (WASI). The talk will go over the benefits of using WASM and WASI for building such models, including improved performance and security.
As the demand for machine learning applications grows, so does the need for efficient and performant solutions. Julia is a high-performance programming language that has gained traction in the machine learning community due to its simplicity and speed. In this talk, we will look at how Julia can be used to build machine learning models on the server using WebAssembly (WASM) and the WebAssembly System Interface (WASI). We will go over the benefits of using WASM and WASI for deployment, such as improved performance and security. In addition, we will demonstrate how to run Julia code on a WASM virtual machine and use WASI to interact with the underlying operating system. Attendees will have a better understanding of the subject by the end of this talk.
Table of Contents:
1. Introduction to server side machine learning
2. How can Julia be used for machine learning
3. What is WebAssembly (WASM) and the WebAssembly System Interface (WASI)
4. How Julia can be used to build machine learning models on the server using WASM and WASI
5. Demonstration
Shivay Lamba
Shivay Lamba is a software developer specializing in DevOps, Machine Learning and Full Stack Development.
He is an Open Source Enthusiast and has been part of various programs like Google Code In and Google Summer of Code as a Mentor.
He is actively involved in community work as well. He is a TensorflowJS SIG member, Mentor in OpenMined and CNCF Service Mesh Community and has given talks at various conferences like Github Satellite, Voice Global, Fossasia Tech Summit, TensorflowJS Show & Tell.
https://pretalx.com/juliacon2023/talk/YFN8CY/
Automatic Differentiation for Statistical and Topological Losses
07-28, 16:00–16:30 (US/Eastern), 32-124
We present a new Julia library, TDAOpt.jl, which provides a unified framework for automatic differentiation and gradient-based optimization of statistical and topological losses using persistent homology. TDAOpt.jl is designed to be efficient and easy to use as well as highly flexible and modular. This allows users to easily incorporate topological regularization into machine learning models in order to optimize shapes, encode domain-specific knowledge, and improve model interpretability.
Persistent homology is a mathematical framework for studying topological features of data, such as connected components, loops, and voids. It has a wide range of applications, including data analysis, computer vision, and shape optimization. However, the use of persistent homology in optimization and machine learning has been limited by the difficulty of computing derivatives of topological quantities.
In our presentation, we will introduce the basics of persistent homology and demonstrate how to use our library to optimize statistical and topological losses in a variety of settings, including shape optimization of point clouds and generative models. We will also discuss the benefits of using Julia for this type of work and how our library fits into the broader Julia ecosystem.
We believe it will be of interest to a wide range of practitioners, including machine learning researchers and practitioners, as well as those working in fields related to topology and scientific computing.
Siddharth Vishwanath
https://pretalx.com/juliacon2023/talk/LENGPQ/ (not live-streamed/recorded)
So you think you know how to take derivatives?
07-28, 16:30–17:00 (US/Eastern), 32-141
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
Derivatives are seen as the "easy" part of learning calculus: a few simple rules, and every function's derivatives are at your fingertips! But these basic techniques can turn bewildering if you are faced with much more complicated functions like a matrix determinant (what is a derivative "with respect to a matrix" anyway?), the solution of a differential equation, or a huge engineering calculation like a fluid simulation or a neural-network model. And needing such derivatives is increasingly common thanks to the growing prevalence of machine learning, large-scale optimization, and many other problems demanding sensitivity analysis of complex calculations. Although many techniques for generalizing and applying derivatives are known, that knowledge is currently scattered across a diverse literature, and requires students to put aside their memorized rules and re-learn what a derivative really is: linearization. In 2022 and 2023, Alan and I put together a one-month, 16-hour "Matrix Calculus" course at MIT that refocuses differential calculus on the linear algebra at its heart, and we hope to remind you that derivatives are not a subject that is "done" after your second semester of calculus.
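A small concrete instance of that "derivative as linearization" viewpoint (our own example, not from the talk): Jacobi's formula says the linearization of det at A is d(det A) = det(A) · tr(A⁻¹ dA), which we can check numerically.

```julia
# Our own example: check Jacobi's formula, the linearization of f(A) = det(A),
# against the actual change produced by a small perturbation dA.
using LinearAlgebra

A  = randn(4, 4)
dA = 1e-6 .* randn(4, 4)

actual    = det(A + dA) - det(A)
predicted = det(A) * tr(A \ dA)
@show isapprox(actual, predicted; rtol = 1e-4)
```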
https://pretalx.com/juliacon2023/talk/N3RRSG/
Closing Ceremony
07-28, 17:00–17:30 (US/Eastern), 26-100
As JuliaCon 2023 comes to a close, join us for a memorable farewell ceremony to celebrate a week of learning, collaboration, and innovation. We'll recap the highlights of the conference, thank our sponsors and volunteers, and recognize outstanding contributions to the Julia community. Don't miss this opportunity to say goodbye to old and new friends, and leave with inspiration for your next Julia project. Safe travels!
(And then hacking option on Friday at Kiva 5:30-11pm; and then at Kiva and Star on Saturday 10am-4pm or so)
no subject
Date: 2023-07-28 12:25 pm (UTC)
Jeremy Kepner: Sparsity: Practice-to-Theory-to-Practice
This is a very nice Google search: Jeremy Kepner Sparsity: Practice-to-Theory-to-Practice
This is also a nice search: hyper-sparse
E.g. "Hypersparse Network Flow Analysis of Packets with GraphBLAS", https://arxiv.org/abs/2209.05725 (Jeremy Kepner is the last author)