I so want this to be true
Jul. 25th, 2023 02:40 pm
"The First Room-Temperature Ambient-Pressure Superconductor" arxiv.org/abs/2307.12008
The discussion makes one cautiously hopeful: news.ycombinator.com/item?id=36864624
Meanwhile, JuliaCon has started at MIT.
no subject
Date: 2023-07-27 06:20 pm (UTC)
https://pretalx.com/juliacon2023/talk/RTCDVR/
State of Julia
07-28, 09:00–09:45 (US/Eastern), 26-100
https://pretalx.com/juliacon2023/talk/QN3XGU/
Learning smoothly: machine learning with RobustNeuralNetworks.jl
07-28, 10:00–10:30 (US/Eastern), 26-100
Neural networks are typically sensitive to small input perturbations, leading to unexpected or brittle behaviour. We present RobustNeuralNetworks.jl: a Julia package for neural network models that are constructed to naturally satisfy robustness constraints. We discuss the theory behind our model parameterisation, give an overview of the package, and demonstrate its use in image classification, reinforcement learning, and nonlinear robotic control.
Modern machine learning relies heavily on rapidly training and evaluating neural networks in problems ranging from image classification to robotic control. However, most existing neural network architectures have no robustness certificates, making them sensitive to even small input perturbations and highly susceptible to poor data quality, adversarial attacks, and other forms of input disturbances. The few neural network architectures proposed in recent years that offer solutions to this brittle behaviour rely on explicitly enforcing constraints during training to “smooth” the network response. These methods are computationally expensive, making them slow and difficult to scale up to complex real-world problems.
Recently, we proposed the Recurrent Equilibrium Network (REN) architecture as a computationally efficient solution to these problems. The REN architecture is flexible in that it includes all commonly used neural network models, such as fully-connected networks, convolutional neural networks, and recurrent neural networks. The weight matrices and bias vectors in a REN are directly parameterised to naturally satisfy behavioural constraints chosen by the user. For example, the user can build a REN with a given Lipschitz constant to ensure the output of the network is quantifiably less sensitive to unexpected input perturbations. Other common options include contracting RENs and input/output passive RENs.
The direct parameterisation of RENs means that no additional constrained optimization methods are needed to train the networks to be less sensitive to attacks or perturbations. We can therefore train RENs with standard, unconstrained optimization methods (such as gradient descent) while also guaranteeing their robustness. Achieving the “best of both worlds” in this way is unique to our REN model class, and allows us to freely train RENs for common machine learning problems as well as more difficult applications where safety and robustness are critical.
In this talk, we will present our RobustNeuralNetworks.jl package. The package is built around the AbstractREN type, encoding the REN model class. It relies heavily on key features of the Julia language (such as multiple dispatch) for a neat, efficient implementation of RENs, and can be used alongside Flux.jl to solve machine learning problems with and without robustness requirements, all in native Julia.
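(Not from the abstract: to get a feel for it, here is a minimal sketch of what building a Lipschitz-bounded REN might look like. The names LipschitzRENParams, REN, and init_states are my guesses at the package's API based on the description above and may not match the released version.)

```julia
# Sketch only: constructing a REN with a prescribed Lipschitz bound γ,
# assuming an API along the lines of LipschitzRENParams / REN / init_states.
using RobustNeuralNetworks, Random

nu, nx, nv, ny = 1, 10, 20, 1          # input, state, neuron, and output sizes
γ = 1.0                                # desired Lipschitz bound

params = LipschitzRENParams{Float64}(nu, nx, nv, ny, γ)
model  = REN(params)                   # direct parameterisation: no constrained optimisation needed

batches = 4
x0 = init_states(model, batches)       # internal states for a batch of 4
u  = randn(nu, batches)
x1, y = model(x0, u)                   # output y is guaranteed to be γ-Lipschitz in u
```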
We will give a brief introduction to the fundamental theory behind our direct parameterisation of neural networks, and outline what we mean by nonlinear robustness. We will follow this with a detailed overview of the RobustNeuralNetworks.jl package structure, including the key types and methods used to construct and implement a REN. To conclude, we will demonstrate some interesting applications of our Julia package for REN in our own research, including in:
Image classification
System identification
Learning-based control for dynamical systems
Real-time control of robotic systems via the Julia C API
Ultimately, we hope to show how RENs will be useful to the wider Julia machine learning community in both research and industry applications. For more information on the REN model class and its uses, please see our two recent papers https://arxiv.org/abs/2104.05942 and https://doi.org/10.1109/LCSYS.2022.3184847.
Nicholas Barbara
Nicholas Barbara is a PhD candidate at the Australian Centre for Robotics, within the University of Sydney. He is interested in robust machine learning, control theory, spacecraft GNC, and all things Julia.
https://pretalx.com/juliacon2023/talk/FVZXUF/ (not livestreamed/not clear if recording will be published/not sure what to do about this; I am very interested in sparseness, but it's inconvenient timing)
Sparsity: Practice-to-Theory-to-Practice
07-28, 11:00–11:25 (US/Eastern), 32-141
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
As we all know, the entire world of computation is mostly matrix multiplies. Within this universe we do allow some variation. Specifically, all the world is mostly either dense matrix multiplies or sparse matrix multiplies. Sparse matrices are often used as a trick to solve larger problems by only storing non-zero values. As a result, there is a large toolkit of powerful sparse matrix software. The availability of sparse matrix tools inspires representing a wide range of problems as sparse matrices. Notably, graphs have many wonderful sparse matrix properties and many graph algorithms can be written as matrix multiplies using a variety of semirings. This inspires developing new sparse matrix software that encompasses a wide range of semiring operations. In the context of graphs, where vertex labels are diverse, it is natural to relax strict dimension constraints and make hyper-sparse matrices a full-fledged member of the sparse matrix software world. The wide availability of hyper-sparse matrices allows addressing a wide range of problems and completely new approaches to parallel computing.
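(A tiny illustration, not from the talk, of "graph algorithms as matrix multiplies": breadth-first reachability written as repeated sparse matrix-vector products. This uses plain SparseArrays over the usual (+, ×) semiring; GraphBLAS-style libraries generalise the same pattern to other semirings, e.g. (min, +) for shortest paths.)

```julia
using SparseArrays

# Breadth-first reachability as sparse matrix-vector products.
# Edge i -> j is stored at A[j, i], so A * frontier gives the next frontier.
function bfs_reachable(A::SparseMatrixCSC, source::Int)
    n = size(A, 1)
    frontier = zeros(Int, n); frontier[source] = 1
    visited  = falses(n);     visited[source]  = true
    while any(frontier .> 0)
        reached  = (A * frontier) .> 0       # one "multiply" step of BFS
        newfront = reached .& .!visited      # keep only newly discovered vertices
        visited .= visited .| newfront
        frontier = Int.(newfront)
    end
    return visited
end

A = sparse([2, 3, 4, 1], [1, 2, 2, 4], ones(Int, 4), 4, 4)   # edges 1->2, 2->3, 2->4, 4->1
bfs_reachable(A, 1)                                          # all four vertices reachable from 1
```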
https://pretalx.com/juliacon2023/talk/9ES8NF/
Falra.jl: Distributed Computing with AI Source Code Generation
07-28, 12:20–12:30 (US/Eastern), 32-124
Falra.jl in Julia provides a straightforward approach to implementing distributed computing, equipped with an AI-assisted feature for generating source code. This addition facilitates more efficient big data transformations. Tasks such as preprocessing 16TB of IoT data can be done in 1/100 of the original time. Developers are now able to generate Julia source code more easily with the aid of AI, further aiding in distributed computing tasks.
This is a real development scenario we encountered: preprocessing a 6-year, 16TB historical IoT raw dataset for data cleaning and transformation. Processing it in a single-machine environment takes 100 days, which is prohibitively time-consuming.
So Falra.jl was developed to let us divide the data cleaning and transformation work into smaller tasks. Falra.jl then automatically distributes these tasks for parallel processing. This architecture saves a lot of computing time and development cost. Through Falra.jl, we were able to complete all IoT data transformations in 1/100 of the time.
Compared to the native Julia Distributed module, the advantage of Falra.jl is that developers do not need to learn Julia's distributed programming syntax; they can keep using their single-machine programs as they always have. In addition, Falra.jl can be deployed on any network reachable via HTTPS, so there is no need to deal with TCP or other network or firewall issues.
Moreover, we've enhanced our approach by integrating AI-assisted auto-generation of Julia source code. This novel feature allows developers to efficiently create Julia code using artificial intelligence. Rather than manually crafting each line of code, the AI can generate source code based on the developer's requirements, accelerating the development process. It makes it feasible for developers, even those unfamiliar with Julia, to quickly produce distributed programs. This AI-driven tool not only simplifies code creation but also enables rapid adaptation and extension of applications built on Falra.jl. The fusion of distributed computing and AI-assisted auto-generation of Julia source code significantly boosts productivity.
Currently, we have released Falra.jl on GitHub (https://github.com/bohachu/Falra.jl) for everyone to use.
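(Again not from the talk, and explicitly not Falra.jl's API: a bare-bones sketch of the general pattern the abstract describes — wrap the unchanged single-machine function in an HTTP endpoint and dispatch chunks to workers over HTTP(S) — written directly against HTTP.jl. The function names here are made up.)

```julia
using HTTP

# Worker side: expose the unchanged single-machine processing function over HTTP(S).
process_chunk(data::String) = uppercase(data)      # stand-in for the real cleaning/transformation

start_worker(port) = HTTP.serve("0.0.0.0", port) do req
    HTTP.Response(200, process_chunk(String(req.body)))
end

# Coordinator side: split the dataset into chunks and send one chunk per request,
# round-robin across the available worker URLs.
function distribute(chunks, worker_urls)
    tasks = [Threads.@spawn HTTP.post(worker_urls[mod1(i, length(worker_urls))], [], chunk)
             for (i, chunk) in enumerate(chunks)]
    return [String(fetch(t).body) for t in tasks]
end
```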
Bowen Chiu
With 33 years of experience in software programming, Bowen is the founder of CAMEO Corporation. He specializes in artificial intelligence and distributed computing, with a particular focus on the environmental sector, the educational sector, and start-ups.
no subject
Date: 2023-07-27 06:36 pm (UTC)
https://pretalx.com/juliacon2023/talk/WRHJPD/
ExprParsers.jl: Object Orientation for Macros
07-28, 14:00–14:30 (US/Eastern), 26-100
You want to build a complex macro? ExprParsers.jl gives you many prebuilt expression parsers - for functions, calls, args, wheres, macros, ... - so that you don't need to care about the different ways these high-level Expr-types can be represented in Julia syntax. Everything is well typed, so that you can use familiar Julia multiple dispatch to extract the needed information from your input Expr.
The need to abstract over Expr-types like functions is already recognized by the widely used MacroTools.jl, which provides support for functions (and arguments) via helpers like splitdef and combinedef that go from Expr to Dict and back.
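(For comparison, the splitdef/combinedef round trip in MacroTools.jl looks roughly like this; a small sketch of mine, not taken from the talk.)

```julia
using MacroTools

ex = :(function add(x::Int, y::Int)
           x + y
       end)

parts = MacroTools.splitdef(ex)         # Dict with keys such as :name, :args, :body, :whereparams
parts[:name] = :add_again               # tweak the parsed pieces...
new_ex = MacroTools.combinedef(parts)   # ...and turn the Dict back into an Expr
```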
ExprParsers.jl differs from MacroTools.jl in that it focuses 100% on this kind of object orientation, extended to many more Expr-types like where syntax, type annotations, keyword args, etc. In addition, ExprParsers are well typed, composable, and extensible, in that you can easily write your own parser object.
When working with ExprParsers, you first construct your (possibly nested) parser, describing in detail what you expect as the input. Then you safely parse given expressions and dispatch on the precise ExprParser types. Finally, you can mutate the parsed results and return the manipulated version, or simply extract information from it.
Stephan Sahm
Stephan Sahm is the founder of the Julia consultancy Jolin.io and organizer of the Julia User Group Munich Meetup. In his academic days, he earned a Master of Applied Stochastics, a Master and Bachelor of Cognitive Science, and a Bachelor of Mathematics/Informatics. For more than 5 years, Stephan Sahm has worked as a senior consultant for Data Science and Engineering, and he is now bringing Julia to industry.
Stephan Sahm's top interests are green computing, functional programming, probabilistic programming, real-time analysis, big data, applied machine learning, and industry applications of Julia in general.
Aside from Julia and sustainable computing, he likes chatting about Philosophy of Mind, Ethics, Consciousness, Artificial Intelligence, and other Cognitive Science topics.
This speaker also appears in:
IsDef.jl: maintainable type inference
SimpleMatch.jl, NotMacro.jl and ProxyInterfaces.jl
https://pretalx.com/juliacon2023/talk/BFQVMX/
REPL Without a Pause: Bringing VimBindings.jl to the Julia REPL
07-28, 14:30–15:00 (US/Eastern), 26-100
VimBindings.jl is a Julia package that emulates vim, the popular text editor, directly in the Julia REPL. This talk will illuminate the context in which a REPL-hacking package runs by taking a deep dive into the Julia REPL code, and articulate the modifications VimBindings.jl makes to introduce novel functionality. The talk will also describe design problems that emerge at the intersection of the REPL and vim paradigms, and the choices made to attempt a coherent fusion of the two.
Vim is a ubiquitous text editor found on almost every modern operating system. Vim (and its predecessor vi) has a storied history as a primary contender in the “editor wars”, its modal editing paradigm often pitted against the modeless, extensibility-oriented Emacs.
Vim users often tout its speed and ease of use, at least after stomaching a steep learning curve. Once a user has learned vim they might question why their fingers should leave home-row, even when they aren’t using vim. Their muscle memory can be applied across many applications by using vim emulation plugins or packages: browsers (vimium and vim vixen), email clients (mutt), IDE plugins (vscode-neovim for vs-code, ideavim for IntelliJ), and shell modes (zsh, bash, fish). Vim emulation can even be used to interact with an operating system: sway for Linux users, AppGrid for MacOS users, or evil mode for Emacs users.
Finally, users can use vim emulation in the Julia REPL. In this talk I will describe how VimBindings.jl works, as well as the design considerations borrowed from other vim emulation implementations in its development. I will take a deep dive into the Julia REPL code and describe how the package introduces new functionality to the REPL. I will also discuss the unique challenges faced during the creation of VimBindings.jl, and the not-so-elegant solutions developed to solve them.
Github repo: https://github.com/caleb-allen/VimBindings.jl
Caleb Allen
Caleb Allen is a software engineer and the author of VimBindings.jl, a package that brings the power and elegance of Vim to the Julia REPL. He has worked in various startups, developing applications and systems in languages such as Java, Kotlin, and Python, among others. He has a passion for building tools and infrastructure that make software development more enjoyable and productive. He also enjoys learning new programming languages as a hobby, and he discovered Julia in 2020 during the pandemic. Since then, he has been fascinated by Julia's features and performance, and has enjoyed learning and contributing to the Julia ecosystem. He is excited to share his experience and insights developing VimBindings.jl with the Julia community at JuliaCon.
3pm slot is particularly tricky
3:30 though is clear:
https://pretalx.com/juliacon2023/talk/M8PLZV/
Machine Learning on Server Side with Julia and WASM
07-28, 15:30–16:00 (US/Eastern), 32-124
Julia is a high-performance programming language that has gained traction in the machine-learning community due to its simplicity and speed. This talk looks at how Julia can be used to build machine learning models on the server using WebAssembly (WASM) and the WebAssembly System Interface (WASI). The talk will go over the benefits of using WASM and WASI for building such models, such as improved performance and security.
As the demand for machine learning applications grows, so does the need for efficient and performant solutions. Julia is a high-performance programming language that has gained traction in the machine learning community due to its simplicity and speed. In this talk, we will look at how Julia can be used to build machine learning models on the server using WebAssembly (WASM) and the WebAssembly System Interface (WASI). We will go over the benefits of using WASM and WASI for deployment, such as improved performance and security. In addition, we will demonstrate how to run Julia code on a WASM virtual machine and use WASI to interact with the underlying operating system. Attendees will have a better understanding of the subject by the end of this talk.
Table of Contents:
1. Introduction to server side machine learning
2. How can Julia be used for machine learning
3. What is WebAssembly (WASM) and the WebAssembly System Interface (WASI)
4. How Julia can be used to build machine learning models on the server using WebAssembly (WASM) and the WebAssembly System Interface (WASI)
5. Demonstration
Shivay Lamba
Shivay Lamba is a software developer specializing in DevOps, Machine Learning and Full Stack Development.
He is an Open Source Enthusiast and has been part of various programs like Google Code In and Google Summer of Code as a Mentor.
He is actively involved in community work as well. He is a TensorflowJS SIG member, Mentor in OpenMined and CNCF Service Mesh Community and has given talks at various conferences like Github Satellite, Voice Global, Fossasia Tech Summit, TensorflowJS Show & Tell.
https://pretalx.com/juliacon2023/talk/YFN8CY/
Automatic Differentiation for Statistical and Topological Losses
07-28, 16:00–16:30 (US/Eastern), 32-124
We present a new Julia library, TDAOpt.jl, which provides a unified framework for automatic differentiation and gradient-based optimization of statistical and topological losses using persistent homology. TDAOpt.jl is designed to be efficient and easy to use as well as highly flexible and modular. This allows users to easily incorporate topological regularization into machine learning models in order to optimize shapes, encode domain-specific knowledge, and improve model interpretability.
Persistent homology is a mathematical framework for studying topological features of data, such as connected components, loops, and voids. It has a wide range of applications, including data analysis, computer vision, and shape optimization. However, the use of persistent homology in optimization and machine learning has been limited by the difficulty of computing derivatives of topological quantities.
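(Side note of mine, not the package from the talk: persistent homology itself is easy to try in Julia today with Ripserer.jl; a quick sketch, assuming its ripserer(points; dim_max=...) entry point.)

```julia
using Ripserer

# 100 noisy samples of a circle: H0 should show one long-lived connected component,
# and H1 should show one long-lived loop.
points = [(cos(θ) + 0.05randn(), sin(θ) + 0.05randn()) for θ in range(0, 2π; length=100)]

diagrams = ripserer(points; dim_max=1)   # persistence diagrams for H0 and H1
diagrams[2]                              # the H1 diagram: one prominent (birth, death) pair
```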
In our presentation, we will introduce the basics of persistent homology and demonstrate how to use our library to optimize statistical and topological losses in a variety of settings, including shape optimization of point clouds and generative models. We will also discuss the benefits of using Julia for this type of work and how our library fits into the broader Julia ecosystem.
We believe it will be of interest to a wide range of practitioners, including machine learning researchers and practitioners, as well as those working in fields related to topology and scientific computing.
Siddharth Vishwanath
https://pretalx.com/juliacon2023/talk/LENGPQ/ (not live-streamed/recorded)
So you think you know how to take derivatives?
07-28, 16:30–17:00 (US/Eastern), 32-141
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
Derivatives are seen as the "easy" part of learning calculus: a few simple rules, and every function's derivatives are at your fingertips! But these basic techniques can turn bewildering if you are faced with much more complicated functions like a matrix determinant (what is a derivative "with respect to a matrix" anyway?), the solution of a differential equation, or a huge engineering calculation like a fluid simulation or a neural-network model. And needing such derivatives is increasingly common thanks to the growing prevalence of machine learning, large-scale optimization, and many other problems demanding sensitivity analysis of complex calculations. Although many techniques for generalizing and applying derivatives are known, that knowledge is currently scattered across a diverse literature, and requires students to put aside their memorized rules and re-learn what a derivative really is: linearization. In 2022 and 2023, Alan and I put together a one-month, 16-hour "Matrix Calculus" course at MIT that refocuses differential calculus on the linear algebra at its heart, and we hope to remind you that derivatives are not a subject that is "done" after your second semester of calculus.
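(To make the "derivative with respect to a matrix" point concrete, here is a small self-contained check, mine rather than course material, of Jacobi's formula ∇_A det(A) = det(A)·(A⁻¹)ᵀ against finite differences.)

```julia
using LinearAlgebra

A = [2.0 1.0; 0.5 3.0]
G = det(A) * inv(A)'            # gradient of det with respect to A, by Jacobi's formula

h = 1e-6
G_fd = similar(A)
for i in eachindex(A)
    dA = zeros(size(A)); dA[i] = h
    G_fd[i] = (det(A + dA) - det(A)) / h   # finite-difference estimate of ∂det/∂A[i]
end

maximum(abs.(G .- G_fd))        # ≈ 0 up to finite-difference error: the two gradients agree
```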
https://pretalx.com/juliacon2023/talk/N3RRSG/
Closing Ceremony
07-28, 17:00–17:30 (US/Eastern), 26-100
As JuliaCon 2023 comes to a close, join us for a memorable farewell ceremony to celebrate a week of learning, collaboration, and innovation. We'll recap the highlights of the conference, thank our sponsors and volunteers, and recognize outstanding contributions to the Julia community. Don't miss this opportunity to say goodbye to old and new friends, and leave with inspiration for your next Julia project. Safe travels!
(And then hacking option on Friday at Kiva 5:30-11pm; and then at Kiva and Star on Saturday 10am-4pm or so)
no subject
Date: 2023-07-28 12:25 pm (UTC)
Jeremy Kepner: Sparsity: Practice-to-Theory-to-Practice
This is a very nice Google search: Jeremy Kepner Sparsity: Practice-to-Theory-to-Practice
This is also a nice search: hyper-sparse
E.g. "Hypersparse Network Flow Analysis of Packets with GraphBLAS", https://arxiv.org/abs/2209.05725 (Jeremy Kepner is the last author)