no subject
Date: 2023-07-27 12:05 am (UTC)
https://pretalx.com/juliacon2023/talk/U9ABUR/
Keynote: Tim Davis
07-26, 09:00–09:45 (US/Eastern), 26-100
Timothy A. Davis, PhD, is a Professor in the Computer Science and Engineering Department at Texas A&M University, and a Fellow of SIAM, ACM, and IEEE. He serves as an associate editor for the ACM Transactions on Mathematical Software. In 2018 he was awarded the Walston Chubb Award for Innovation, by Sigma Xi.
Prof. Davis is a world leader in the creation of innovative algorithms and widely used software for solving large sparse matrix problems that arise in a vast range of real-world technological and social applications. He is the author of SuiteSparse.
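For context, Julia's own sparse linear algebra is built on SuiteSparse: the sparse backslash solver dispatches to SuiteSparse's UMFPACK and CHOLMOD factorizations under the hood. A minimal sketch using only standard-library calls:

    using SparseArrays, LinearAlgebra

    # A sparse, diagonally dominant system
    A = sprand(1000, 1000, 0.01) + 100I
    b = rand(1000)

    # Sparse \ dispatches to SuiteSparse's UMFPACK LU factorization
    x = A \ b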
https://pretalx.com/juliacon2023/talk/HXFG7X/
IsDef.jl: maintainable type inference
07-26, 12:00–12:30 (US/Eastern), 32-141
IsDef.jl provides maintainable type inference in that it
1. Uses code generation where possible to deterministically infer the types of your code
2. Allows you to overload type-inference with your custom implementation
3. If code generation does not work and no custom type-inference rule is given, it falls back to Core.Compiler.return_type, wrapped in some safety nets
In this talk, IsDef.jl is presented, along with typical applications and details of the implementation.
I am super happy to announce the release of IsDef.jl. For years I have wanted to be able to check whether some function is defined for my types. Plain applicable was not enough for me, because it inspects only the first layer; if that layer is a generic function (mere syntactic sugar, say), it won't give meaningful results. That was the motivation.
It turns out you need full-fledged type inference for this. However, Julia's default type inference is not intended for use in packages: inference is not guaranteed to be stable, may be nondeterministic, and may change in any minor Julia version, which makes it very hard to maintain code that depends on it. The purpose of Julia's default type inference is code optimization, and as such it is an implementation detail of Julia.
Hence, there is a need for another type inference system, or at least a layer around Julia's default inference which makes it maintainable. Welcome to IsDef.
The high-level interface of IsDef is very simple, consisting of two functions, isdef and Out. Their usage will be explained, along with implementation details and design decisions.
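A minimal sketch of those two functions, following the interface described above (exact results depend on your Julia version and the package's inference rules):

    using IsDef

    isdef(+, Int, Int)     # true: + is defined (and inferable) for two Ints
    isdef(+, Int, String)  # false: no such method exists
    Out(+, Int, Int)       # Int64: the inferred return type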
The package version is 0.1.x; suggestions, criticism, and ideas for improving the way inference is made maintainable are very welcome, so that IsDef.jl can become the go-to package for maintainable type inference.
Stephan Sahm
Stephan Sahm is the founder of the Julia consultancy Jolin.io and organizer of the Julia User Group Munich meetup. In his academic days he earned a Master's in Applied Stochastics, a Master's and Bachelor's in Cognitive Science, and a Bachelor's in Mathematics/Informatics. For more than five years he has worked as a senior consultant for data science and engineering, and he now brings Julia to industry.
Stephan Sahm's top interests are green computing, functional programming, probabilistic programming, real-time analysis, big data, applied machine learning, and industry applications of Julia in general.
Aside from Julia and sustainable computing, he likes chatting about Philosophy of Mind, Ethics, Consciousness, Artificial Intelligence, and other Cognitive Science topics.
This speaker also appears in:
ExprParsers.jl: Object Orientation for Macros
SimpleMatch.jl, NotMacro.jl and ProxyInterfaces.jl
https://pretalx.com/juliacon2023/talk/CSG8NU/
Exploring the State of Machine Learning for Biological Data
07-26, 12:00–12:30 (US/Eastern), 32-124
Exploring the use of Julia in analyzing biological data: a discussion of libraries and packages, the challenges and opportunities of using machine learning on biological data, and examples of past and future applications.
This talk, "Exploring the State of Machine Learning for Biological Data in Julia," will delve into the use of the high-performance programming language Julia in analyzing biological data. We will discuss various libraries and packages available in Julia, such as BioJulia and Flux.jl, and the benefits of using Julia for machine learning in the field of biology. Additionally, the challenges and opportunities that arise when using machine learning techniques on biological data, such as dealing with high-dimensional and heterogeneous data, will be addressed. The talk will also include examples of how machine learning has been applied to biological data in the past and what the future holds for this field.
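As a flavor of the tooling the talk surveys, here is a hedged toy sketch (not an example from the talk): one-hot encoding a DNA sequence and passing it through a small Flux.jl network.

    using Flux

    # One-hot encode a DNA sequence over the alphabet A, C, G, T
    seq = collect("ACGTACGT")
    X = Float32.(Flux.onehotbatch(seq, ['A', 'C', 'G', 'T']))  # 4×8 matrix

    # A tiny per-position classifier (two hypothetical site classes)
    model = Chain(Dense(4 => 8, relu), Dense(8 => 2), softmax)
    probs = model(X)  # 2×8 matrix of class probabilities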
Edmund Miller
PhD Student at UT Dallas in the Functional Genomics Lab
nf-core maintainer
This speaker also appears in:
Unlocking the Power of Genomic Analysis in Julia
Cancelled, but remarkable: https://pretalx.com/juliacon2023/talk/SXKKZE/
Fine-tuning GPT-3 to understand the Julia Ecosystem
07-26, 12:10–12:20 (US/Eastern), 32-123
GPT-3 is a large language model from OpenAI that can do many general-purpose natural language processing (NLP) tasks. However, since Julia is a smaller ecosystem, GPT-3 often lacks context about Julia and doesn't produce great results. This talk goes over how to fine-tune GPT-3 with OpenAI.jl.
OpenAI.jl provides an interface to the OpenAI API. This talk will cover how to use the API, how the fine-tuning process works (including how to create a training dataset), and some amazing applications/workflows enabled by giving GPT-3 more context about the Julia ecosystem.
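For flavor, a minimal completion call following the pattern in the OpenAI.jl README (the model name is illustrative, and response fields may differ across package versions):

    using OpenAI

    secret_key = ENV["OPENAI_API_KEY"]

    r = create_completion(
        secret_key,
        "davinci";  # base model; a fine-tuned model would use its own ID
        prompt = "Write a Julia function that reverses a string.",
        max_tokens = 64,
    )
    println(r.response)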
Logan is the Developer Community Advocate for the Julia programming language and is on the Board of Directors at NumFOCUS.
https://github.com/JuliaML/OpenAI.jl
https://pretalx.com/juliacon2023/talk/FGZBBB/
Interpretable Machine Learning with SymbolicRegression.jl
07-26, 14:30–15:00 (US/Eastern), 32-D463 (Star)
SymbolicRegression.jl is a state-of-the-art symbolic regression library written from scratch in Julia using a custom evolutionary algorithm. The software emphasizes high-performance distributed computing, and can find arbitrary symbolic expressions to optimize a user-defined objective – thus offering a very interpretable type of machine learning. SymbolicRegression.jl and its Python frontend PySR have been used for model discovery in over 30 research papers, from astrophysics to economics.
SymbolicRegression.jl is an open-source library for practical symbolic regression, a type of machine learning that discovers human-interpretable symbolic models. SymbolicRegression.jl was developed to democratize and popularize symbolic regression for the sciences, and is built on a high-performance distributed backend, a flexible search algorithm, and interfaces with several deep learning packages. The hand-rolled internal search algorithm is a mixed evolutionary algorithm, which consists of a unique evolve-simplify-optimize loop, designed for optimization of unknown real-valued constants in newly-discovered empirical expressions. The backend is highly optimized, capable of fusing user-defined operators into SIMD kernels at runtime with LoopVectorization.jl, performing automatic differentiation with Zygote.jl, and distributing populations of expressions to thousands of cores across a cluster using ClusterManagers.jl. In describing this software, I will also share a new benchmark, “EmpiricalBench,” to quantify the applicability of symbolic regression algorithms in science. This benchmark measures recovery of historical empirical equations from original and synthetic datasets.
In this talk, I will describe the nuts and bolts of the search algorithm, its efficient evaluation scheme, DynamicExpressions.jl, and how SymbolicRegression.jl may be used in scientific workflows. I will review existing applications of the software (https://astroautomata.com/PySR/papers/). I will also discuss interfaces with other Julia libraries, including SymbolicUtils.jl, as well as SymbolicRegression.jl's PyJulia-enabled link to the ScikitLearn ecosystem in Python.
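To make the interface concrete, here is a minimal search sketch following the package README of the time (entry-point names such as EquationSearch have changed across versions, so treat this as indicative):

    using SymbolicRegression

    # Toy dataset: 5 features, 100 samples, with a known generating equation
    X = randn(Float32, 5, 100)
    y = 2 .* cos.(X[4, :]) .+ X[1, :] .^ 2 .- 2

    options = Options(
        binary_operators = [+, *, /, -],
        unary_operators = [cos, exp],
    )

    # Run the evolutionary search for symbolic expressions fitting y
    hall_of_fame = EquationSearch(X, y; niterations = 40, options = options)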
Miles Cranmer
Miles Cranmer is an Assistant Professor in Data Intensive Science at the University of Cambridge. He received his PhD in Astrophysics from Princeton University and his BSc in Physics from McGill University.
Miles is interested in the automation of scientific research with machine learning. His current focus is the development of interpretable machine learning methods, especially symbolic regression, as well as applying them to multi-scale dynamical systems.