I so want this to be true
Jul. 25th, 2023 02:40 pm
"The First Room-Temperature Ambient-Pressure Superconductor" arxiv.org/abs/2307.12008
The discussion makes one cautiously hopeful: news.ycombinator.com/item?id=36864624
Meanwhile, JuliaCon has started at MIT.
no subject
Date: 2023-07-27 06:36 pm (UTC)
https://pretalx.com/juliacon2023/talk/WRHJPD/
ExprParsers.jl: Object Orientation for Macros
07-28, 14:00–14:30 (US/Eastern), 26-100
You want to build a complex macro? ExprParsers.jl gives you many prebuilt expression parsers - for functions, calls, args, wheres, macros, ... - so that you don't need to care about the different ways these high-level Expr-types can be represented in Julia syntax. Everything is well typed, so that you can use familiar Julia multiple dispatch to extract the needed information from your input Expr.
The need for abstracting over Expr-types like functions is already recognized by the widely used MacroTools.jl, which supports functions (and arguments) via a set of helpers like splitdef and combinedef that go from Expr to Dict and back.
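To make that Expr-to-Dict-and-back pattern concrete, here is a minimal sketch using MacroTools.jl's splitdef and combinedef (this is not the ExprParsers.jl API itself; the function names and the renaming step are just illustrative):

```julia
# Minimal sketch of the Expr -> Dict -> Expr round trip with MacroTools.jl.
# (Illustrative only; ExprParsers.jl replaces the Dict with typed parser objects.)
using MacroTools

ex = :(function greet(name::String; excited::Bool = false)
    excited ? string("Hello, ", name, "!") : string("Hello, ", name, ".")
end)

d = MacroTools.splitdef(ex)     # Dict with :name, :args, :kwargs, :body, ...
d[:name] = :greet2              # tweak the parsed representation
ex2 = MacroTools.combinedef(d)  # rebuild a function-definition Expr

eval(ex2)                       # defines greet2
greet2("Julia"; excited = true) # "Hello, Julia!"
```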
ExprParsers.jl differs from MacroTools.jl in that it focuses 100% on this kind of object orientation, extended to many more Expr-types like where syntax, type annotations, keyword args, etc. In addition, ExprParsers are well typed, composable and extensible, in that you can easily write your own parser object.
When working with ExprParsers, you first construct your (possibly nested) parser, describing in detail what you expect as the input. Then you safely parse given expressions and dispatch on the precise ExprParser types. Finally, you can mutate the parsed results and return the manipulated version, or simply extract information from it.
Stephan Sahm
Stephan Sahm is founder of the Julia consultancy Jolin.io and organizer of the Julia User Group Munich Meetup. In his academic days, he earned a Master in Applied Stochastics, a Master and Bachelor in Cognitive Science, and a Bachelor in Mathematics/Informatics. For more than five years, Stephan Sahm has worked as a senior consultant for Data Science and Engineering, now bringing Julia to industry.
Stephan Sahm's top interests are green computing, functional programming, probabilistic programming, real-time analysis, big data, applied machine learning, and industry applications of Julia in general.
Aside from Julia and sustainable computing, he likes chatting about Philosophy of Mind, Ethics, Consciousness, Artificial Intelligence and other Cognitive Science topics.
This speaker also appears in:
IsDef.jl: maintainable type inference
SimpleMatch.jl, NotMacro.jl and ProxyInterfaces.jl
https://pretalx.com/juliacon2023/talk/BFQVMX/
REPL Without a Pause: Bringing VimBindings.jl to the Julia REPL
07-28, 14:30–15:00 (US/Eastern), 26-100
VimBindings.jl is a Julia package that emulates vim, the popular text editor, directly in the Julia REPL. This talk will illuminate the context in which a REPL-hacking package runs by taking a deep dive into the Julia REPL code, and articulate the modifications VimBindings.jl makes to introduce novel functionality. The talk will also describe design problems that emerge at the intersection of the REPL and vim paradigms, and the choices made to attempt a coherent fusion of the two.
Vim is a ubiquitous text editor found on almost every modern operating system. Vim (and its predecessor vi) has a storied history as a primary contender in the "editor wars", its modal editing paradigm often pitted against the modeless, extensibility-oriented Emacs.
Vim users often tout its speed and ease of use, at least after stomaching a steep learning curve. Once a user has learned vim they might question why their fingers should leave home-row, even when they aren’t using vim. Their muscle memory can be applied across many applications by using vim emulation plugins or packages: browsers (vimium and vim vixen), email clients (mutt), IDE plugins (vscode-neovim for vs-code, ideavim for IntelliJ), and shell modes (zsh, bash, fish). Vim emulation can even be used to interact with an operating system: sway for Linux users, AppGrid for MacOS users, or evil mode for Emacs users.
Finally, users can use vim emulation in the Julia REPL. In this talk I will describe how VimBindings.jl works, as well as the design considerations borrowed from other vim emulation implementations in its development. I will take a deep dive into the Julia REPL code and describe how the package introduces new functionality to the REPL. I will also discuss the unique challenges faced during the creation of VimBindings.jl, and the not-so-elegant solutions developed to solve them.
Github repo: https://github.com/caleb-allen/VimBindings.jl
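For a feel of the extension point such a package hooks into, here is a minimal sketch of adding a custom keybinding to the Julia REPL via REPL.LineEdit. It uses the documented extra_repl_keymap mechanism; VimBindings.jl's real implementation rewires much more of the REPL than this, and the Ctrl-Q action here is just a toy example.

```julia
# Put this in ~/.julia/config/startup.jl.
import REPL
import REPL.LineEdit

# Ctrl-Q inserts a marker string at the cursor (a toy action).
const extra_keys = Dict{Any,Any}(
    "^Q" => (s, o...) -> LineEdit.edit_insert(s, "# ctrl-q pressed"),
)

atreplinit() do repl
    repl.interface = REPL.setup_interface(repl; extra_repl_keymap = extra_keys)
end
```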
Caleb Allen
Caleb Allen is a software engineer and the author of VimBindings.jl, a package that brings the power and elegance of Vim to the Julia REPL. He has worked in various startups, developing applications and systems in languages such as Java, Kotlin, and Python, among others. He has a passion for building tools and infrastructure that make software development more enjoyable and productive. He also enjoys learning new programming languages as a hobby, and he discovered Julia in 2020 during the pandemic. Since then, he has been fascinated by Julia's features and performance, and has enjoyed learning and contributing to the Julia ecosystem. He is excited to share his experience and insights developing VimBindings.jl with the Julia community at JuliaCon.
3pm slot is particularly tricky
3:30 though is clear:
https://pretalx.com/juliacon2023/talk/M8PLZV/
Machine Learning on Server Side with Julia and WASM
07-28, 15:30–16:00 (US/Eastern), 32-124
Julia is a high-performance programming language that has gained traction in the machine-learning community due to its simplicity and speed. The talk looks at how Julia can be used to build machine learning models on the server using WebAssembly (WASM) and the WebAssembly System Interface (WASI). It will go over the benefits of using WASM and WASI for deployment, such as improved performance and security.
As the demand for machine learning applications grows, so does the need for efficient and performant solutions. Julia is a high-performance programming language that has gained traction in the machine learning community due to its simplicity and speed. We will look at how Julia can be used to build machine learning models on the server using WebAssembly (WASM) and the WebAssembly System Interface (WASI) in this talk. We will go over the benefits of using WASM and WASI for deployment, such as improved performance and security. In addition, we will demonstrate how to run Julia code on a WASM virtual machine and use WASI to interact with the underlying operating system. Attendees will have a better understanding of the subject by the end of this talk.
Table of Contents:
1. Introduction to server-side machine learning
2. How Julia can be used for machine learning
3. What is WebAssembly (WASM) and the WebAssembly System Interface (WASI)
4. How Julia can be used to build machine learning models on the server using WASM and WASI
5. Demonstration
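Since the abstract doesn't spell out the WASM toolchain, here is only a minimal plain-Julia sketch of the server-side inference part, assuming HTTP.jl and JSON3.jl and a toy linear model as a stand-in; packaging and running this inside a WASM/WASI runtime is the subject of the talk and is not shown here.

```julia
# Minimal sketch of server-side model inference in plain Julia.
# (Toy linear model; a WASM/WASI deployment would run the same handler
#  logic inside a WASM runtime instead of a native Julia process.)
using HTTP, JSON3

const W = [0.5, -1.2, 0.3]   # hypothetical pretrained weights
const b = 0.1

predict(x) = sum(W .* x) + b

HTTP.serve("127.0.0.1", 8080) do req
    x = Float64.(JSON3.read(String(req.body)))   # expects a JSON array of 3 numbers
    HTTP.Response(200, JSON3.write((prediction = predict(x),)))
end
```

A request like curl -X POST --data '[1.0, 2.0, 3.0]' http://127.0.0.1:8080 would then return a small JSON payload with the prediction.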
Shivay Lamba
Shivay Lamba is a software developer specializing in DevOps, Machine Learning and Full Stack Development.
He is an open-source enthusiast and has been part of various programs like Google Code-in and Google Summer of Code as a mentor.
He is actively involved in community work as well. He is a TensorFlow.js SIG member and a mentor in OpenMined and the CNCF Service Mesh community, and has given talks at various conferences such as GitHub Satellite, Voice Global, FOSSASIA Tech Summit, and TensorFlow.js Show & Tell.
https://pretalx.com/juliacon2023/talk/YFN8CY/
Automatic Differentiation for Statistical and Topological Losses
07-28, 16:00–16:30 (US/Eastern), 32-124
We present a new Julia library, TDAOpt.jl, which provides a unified framework for automatic differentiation and gradient-based optimization of statistical and topological losses using persistent homology. TDAOpt.jl is designed to be efficient and easy to use as well as highly flexible and modular. This allows users to easily incorporate topological regularization into machine learning models in order to optimize shapes, encode domain-specific knowledge, and improve model interpretability.
Persistent homology is a mathematical framework for studying topological features of data, such as connected components, loops, and voids. It has a wide range of applications, including data analysis, computer vision, and shape optimization. However, the use of persistent homology in optimization and machine learning has been limited by the difficulty of computing derivatives of topological quantities.
In our presentation, we will introduce the basics of persistent homology and demonstrate how to use our library to optimize statistical and topological losses in a variety of settings, including shape optimization of point clouds and generative models. We will also discuss the benefits of using Julia for this type of work and how our library fits into the broader Julia ecosystem.
We believe this talk will be of interest to a wide audience, including machine learning researchers and practitioners, as well as those working in fields related to topology and scientific computing.
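For a flavor of what gradient-based optimization of a point cloud looks like in Julia, here is a minimal sketch using Zygote.jl for the automatic differentiation; the loss is a simple geometric stand-in rather than a persistent-homology loss, and TDAOpt.jl's actual API is not shown.

```julia
# Minimal sketch: gradient descent on a point cloud with Zygote.jl.
# (The loss is a geometric stand-in; a topological loss computed via
#  persistent homology would slot in where `loss` is used.)
using Zygote

# Stand-in loss: pull every 2-D point (a column of X) toward the unit circle.
loss(X) = sum((sqrt.(sum(X .^ 2; dims = 1)) .- 1.0) .^ 2)

function optimize(X; steps = 200, lr = 0.05)
    for _ in 1:steps
        g = Zygote.gradient(loss, X)[1]   # gradient of loss w.r.t. X, same shape as X
        X = X .- lr .* g                  # plain gradient-descent step
    end
    return X
end

X0 = randn(2, 50)
X1 = optimize(X0)
loss(X0), loss(X1)   # the loss should drop substantially
```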
Siddharth Vishwanath
https://pretalx.com/juliacon2023/talk/LENGPQ/ (not live-streamed/recorded)
So you think you know how to take derivatives?
07-28, 16:30–17:00 (US/Eastern), 32-141
Join us for ASE-60, where we celebrate the life and the career of Professor Alan Stuart Edelman, on the occasion of his 60th birthday: https://math.mit.edu/events/ase60celebration/
Derivatives are seen as the "easy" part of learning calculus: a few simple rules, and every function's derivatives are at your fingertips! But these basic techniques can turn bewildering if you are faced with much more complicated functions like a matrix determinant (what is a derivative "with respect to a matrix" anyway?), the solution of a differential equation, or a huge engineering calculation like a fluid simulation or a neural-network model. And needing such derivatives is increasingly common thanks to the growing prevalence of machine learning, large-scale optimization, and many other problems demanding sensitivity analysis of complex calculations. Although many techniques for generalizing and applying derivatives are known, that knowledge is currently scattered across a diverse literature, and requires students to put aside their memorized rules and re-learn what a derivative really is: linearization. In 2022 and 2023, Alan and I put together a one-month, 16-hour "Matrix Calculus" course at MIT that refocuses differential calculus on the linear algebra at its heart, and we hope to remind you that derivatives are not a subject that is "done" after your second semester of calculus.
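As a tiny taste of "derivative with respect to a matrix", here is a sketch (assuming ForwardDiff.jl and the standard-library LinearAlgebra) checking Jacobi's formula, d det(A) = det(A) tr(A^{-1} dA), against automatic differentiation; the example matrix is arbitrary.

```julia
# Minimal sketch: the gradient of det with respect to the matrix entries.
# Jacobi's formula gives  ∂det(A)/∂A = det(A) * inv(A)'  for invertible A.
using LinearAlgebra, ForwardDiff

A = [2.0 1.0; 0.5 3.0]

g_ad     = ForwardDiff.gradient(det, A)   # AD treats A as a 2×2 array of inputs
g_closed = det(A) * transpose(inv(A))     # closed form from the linearization

g_ad ≈ g_closed   # true
```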
https://pretalx.com/juliacon2023/talk/N3RRSG/
Closing Ceremony
07-28, 17:00–17:30 (US/Eastern), 26-100
As JuliaCon 2023 comes to a close, join us for a memorable farewell ceremony to celebrate a week of learning, collaboration, and innovation. We'll recap the highlights of the conference, thank our sponsors and volunteers, and recognize outstanding contributions to the Julia community. Don't miss this opportunity to say goodbye to old and new friends, and leave with inspiration for your next Julia project. Safe travels!
(And then hacking option on Friday at Kiva 5:30-11pm; and then at Kiva and Star on Saturday 10am-4pm or so)