GMUM PROJECTS

PROJECT NAME | KEYWORDS | DESCRIPTION | PEOPLE | CONTACT | STUDENT NEEDED | REQUIREMENTS/ADDITIONAL INFO
---|---|---|---|---|---|---
Gaussian Splatting | neural rendering | Two interesting projects: 1. removing background artifacts in GS with a GAN; 2. generating 3D objects from a single photo. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Diffusion models | diffusion models | Two interesting projects: 1. an implicit representation of diffusion, where the goal is to shrink the architecture so that it runs on ordinary GPUs; 2. diffusion on network weights. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
NeRF (3D objects) | NeRF | Several projects in the context of generating 3D objects. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Continual learning | continual learning, meta learning | An ongoing project based on using hypernetworks for the continual learning task. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Video Generation | generative models | The project is based on representing sound with a neural network. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Autoregressive/diffusion-based Normalizing Flows | Normalizing Flows, Generative Models, Diffusion Models, Autoregressive Models | The aim of this project is to create new Normalizing Flow models in the style of diffusion or autoregressive models. | Marcin Sendera | marcin.sendera[AT]gmail.com | YES | PyTorch
Continual Few-Shot Learning | Continual Learning, Meta-Learning, Few-Shot Learning | The aim of this project is to tackle the problem of continual few-shot learning. | Marcin Sendera | marcin.sendera[AT]gmail.com | YES | PyTorch
Normalizing Flows in Meta-Learning | Normalizing Flows, Generative Models, Meta-Learning, Few-Shot Learning | The aim of this project is to utilize Normalizing Flows and other generative models in architectures used for very large meta-learning datasets. | Marcin Sendera | marcin.sendera[AT]gmail.com | YES | PyTorch, TensorFlow
Extended Gaussian Processes in Meta-Learning (few-shot regression) | Normalizing Flows, Generative Models, Gaussian Processes, Meta-Learning, Few-Shot Learning | The aim of this project is to extend the Non-Gaussian Gaussian Processes framework in terms of flexibility (e.g., adding a conditional case based on support-set data). | Marcin Sendera, Tomasz Kuśmierczyk | marcin.sendera[AT]gmail.com | YES | PyTorch
Few-shot learning (with hypernetworks) | Few-Shot Learning, Meta-Learning | Few-shot learning (FSL), also referred to as low-shot learning (LSL) in some sources, is a type of machine learning problem where the training dataset contains limited information. Few-shot learning aims for deep learning models to predict the correct class of instances when only a small number of examples are available in the training dataset. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Meta-learning (continual learning + few-shot) | Few-Shot Learning, Meta-Learning | Our goal is to verify the MAML algorithm in the continual learning setting. | Przemysław Spurek, Jacek Tabor, and others | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Subnetworks hunting | Optimization, sparsity, pruning, grokking | The goal of the project is to examine the structure of the network during early-stage training. Detecting such parts of the model may be crucial for increasing the computational efficiency of the first epochs of gradient-based approaches. There have been numerous relevant articles spanning similar topics, but it is essential to build a mechanism that facilitates automatic subnetwork detection, as well as an efficient way to achieve it. Contact me if you want to discuss the project / get familiar with the literature. You would join an ongoing project with some intermediate successes. | Mateusz Pyla | mateusz.pyla[AT]doctoral.uj.edu.pl | | 
Outliers regularization | Optimization, data attribution | In this project we will try to take advantage of various data attribution methods, such as TRAK (https://arxiv.org/abs/2303.14186), and apply them to the problem of selecting the right examples within the batch at the early stages of training. Let me know if something is unclear. You would join an ongoing project with some intermediate successes. | Mateusz Pyla, Igor Podolak | mateusz.pyla[AT]doctoral.uj.edu.pl | | 
Bayesian Last Layer | Bayesian learning, Bayesian Neural Networks | We will inspect various forms of applying Bayesian methods solely to the last layer, in order to maintain the overall capacity of BNNs while making the training more effective. This year a new, promising method emerged. The base paper and code are available: https://github.com/VectorInstitute/vbll | Mateusz Pyla, Tomasz Kuśmierczyk | mateusz.pyla[AT]doctoral.uj.edu.pl | | 
Bayesian Continual Learning | continual learning, Bayesian learning, optimization | The approach is based on this year's paper https://arxiv.org/pdf/2405.18758 | Mateusz Pyla | mateusz.pyla[AT]doctoral.uj.edu.pl | YES | A strong maths background is nice to have. The student needs to know how to program in either TensorFlow or PyTorch (in order to understand the existing code and implement new methods).
Bayesian Flow Networks | diffusion models, computer vision, various types of data | Last year, BFNs were proposed as a universal generative approach to effectively learn various types of data (images, text, tabular) using a denoising scheme (the same paradigm as in diffusion models). We would look at the abilities of BFNs applied to image and tabular data. https://arxiv.org/abs/2308.07037 | Mateusz Pyla | mateusz.pyla[AT]doctoral.uj.edu.pl | | 
GAN + NeRF | generative models for images, hypernetworks | In this project we will use a GAN for generating NeRF representations. NeRF: https://www.matthewtancik.com/nerf | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
NeRF for modeling 3D faces | generative models for images, hypernetworks | In this project we will use a GAN for generating NeRF representations. NeRF: https://www.matthewtancik.com/nerf | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Early exit for visual transformer | early exit, visual transformer | | Jacek Tabor, Klaudia Bałazy | jacek.tabor[AT]uj.edu.pl | YES | PyTorch
Contrastive learning with the use of memorization | contrastive learning | The goal of this work is to create a new approach to contrastive learning in which we train a network to memorize random labels. | Jacek Tabor, Marek Śmieja | jacek.tabor[AT]uj.edu.pl | YES | PyTorch
Ensemble learning | ensemble, hypernetworks | The goal of the project is to build a single network (using the hypernetwork concept) that can generate diverse networks for solving a single task. | Jacek Tabor, Przemysław Spurek | jacek.tabor[AT]uj.edu.pl | YES | PyTorch
Generating 3D objects with NeRF | deep neural networks | The goal of the project is to generate high-quality models of human faces with the NeRF algorithm. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | PyTorch
Analyzing 3D point clouds | deep neural networks | The project involves creating generative models dedicated to 3D objects. | Przemysław Spurek, Jacek Tabor, and others | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Augmenting SGD optimizers with low-dimensional 2nd-order information | SGD optimization | SGD optimization is currently dominated by 1st-order methods like ADAM. Augmenting them with 2nd-order information would suggest, e.g., an optimal step size. Such an online parabola model can be maintained nearly for free by extracting linear trends of the gradient sequence, and is planned to be used to improve standard methods like ADAM (a rough sketch of the idea appears after the project tables). Introduction with code: https://community.wolfram.com/groups/-/m/t/2917908 | Jarek Duda | jaroslaw.duda[at]uj.edu.pl | YES | The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis preferred.
Hierarchical correlation reconstruction, multidirectional artificial neurons | modelling joint distribution, non-stationarity, biology-inspired NN | HCR combines the advantages of machine learning and statistics: at very low cost it offers an MSE-optimal model of the joint distribution of multiple variables as a polynomial, by decomposing statistical dependencies into (interpretable) mixed moments. Intro with code: https://community.wolfram.com/groups/-/m/t/3017754 Biology-inspired NN based on HCR: https://arxiv.org/pdf/2405.05097 | Jarek Duda | jaroslaw.duda[at]uj.edu.pl | YES | Experience in mathematics and statistics preferred.
Conditional generative models | generative models, multi-label learning, learning with partial labels | The goal is to enable control over the object generation process in StyleGAN-type models (and others). Solution realized so far: https://ojs.aaai.org/index.php/AAAI/article/view/20843. Besides developing such models, we want to apply them, e.g., to generating counterfactual examples, which make it possible to explain the predictions of ML models. | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Hierarchical methods | unsupervised learning, self-supervised learning, hierarchical clustering | Neural networks achieve very good results in typical classification and clustering tasks, but the interpretation of the obtained results is limited. In this project we want to focus on constructing deep models that make predictions by taking a sequence of decisions. Intuitively, we will work on models that build a decision tree/graph, which in particular allows for better interpretation of the results. Model realized so far: https://arxiv.org/abs/2107.13214 | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Learning from tabular data | tabular data, hypernetworks, ensemble learning | Neural networks achieve very good results in popular domains such as images or text. In the case of tabular data, which lacks local structure, shallow methods such as random forests or XGBoost often achieve better results. The goal is to develop neural network models that can be successfully applied to tabular data. | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Contrastive self-supervised learning | self-supervised learning, data augmentation | Self-supervised learning models allow building a data representation in an unsupervised way, which can later be successfully used for classification or clustering. Much, however, depends on the augmentations used: if an augmentation changes the class, it is hard to later use such a representation for classification. In this project we want to build models that produce representations less sensitive to the type of augmentations used. | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Molecular generative models | chemical molecules, generative models | In this project we want to build models for generating chemical molecules. In particular, we want the generated molecules to satisfy given conditions such as activity, solubility, number of rings, etc. | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
Segmentation learning with region of confidence | deep neural networks, learning methods | What if not all objects in a single sample are labeled? Can we develop a method for learning deep models in such a case? | Krzysztof Misztal | krzysztof.misztal@uj.edu.pl | YES | The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis preferred.
Generative Models in Drug Design | deep learning, cheminformatics, generative models | We want to find a way to generate chemical molecules that is useful from the perspective of the drug design process. Primarily, the new generative model should be able to follow given structural constraints and generate structural analogs, i.e., molecules similar to previously seen promising compounds. | Tomasz Danel, Łukasz Maziarka | tomasz.danel[AT]ii.uj.edu.pl | YES | PyTorch and TensorFlow
Adaptations of travel behaviour in an agent-based urban mobility model | agent-based, reinforcement learning, two-sided mobility, urban mobility | You will simulate a two-sided urban mobility market (like Uber or Lyft), where agents get rewarded for their actions. In particular, travellers can decide among platforms (Uber or Lyft) or opt out (use public transport) based on previous experiences. They, however, need to learn which actions are (subjectively) optimal for them. You will use https://github.com/RafalKucharskiPK/MaaSSim and apply decision modules to the agents. | Rafał Kucharski | rafal.kucharski__uj.edu.pl | YES | PyTorch or TensorFlow
Distributed learning for CAVs in a two-sided mobility market | agent-based, reinforcement learning, two-sided mobility, urban mobility | You will simulate a two-sided urban mobility market (like Uber or Lyft), where agents get rewarded for their actions. In particular, drivers (or Connected Autonomous Vehicles, CAVs) can reposition and wait for requests in different parts of the city. They, however, need to learn when and where it is efficient to reposition. You will use https://github.com/RafalKucharskiPK/MaaSSim and apply decision modules to the agents. | Rafał Kucharski | rafal.kucharski__uj.edu.pl | YES | PyTorch or TensorFlow
A benchmark for comparing early-exiting and conditional computation methods and models | conditional computation, pruning, computationally efficient deep models | The project aims to create a unified benchmark for multiple methods that reduce the inference time of deep learning models. We begin by focusing on early-exiting methods (a minimal early-exit sketch appears after the project tables). Your task will be to reimplement a conditional computation method from a selected published paper in our common codebase. Conditional computation methods are usually simple to implement and provide significant computational cost savings. We intend to publish the benchmark with the accompanying analysis as a paper at a rank A* conference. We have tasks appropriate for both beginners and people with experience. | Bartosz Wójcik, Maciej Wołczyk | bartwojc[AT]gmail.com | YES | The student needs to know how to program in PyTorch (in order to understand the existing code and implement new methods).
Ride-pooling heuristics: combinatorial explosion and supervised learning | supervised learning, graph theory, urban mobility, transport | You will apply a ride-pooling algorithm that pools travellers (e.g., of Uber) into attractive groups. You will use ExMAS (https://github.com/RafalKucharskiPK/ExMAS), which provides an exact analytical search of the combinatorially exploding search space (e.g., for 1000 trip requests there are almost a googol possible 5-person groups). You will use these analytical results to train supervised machine learning and explore ways to make the search space searchable. | Rafał Kucharski | rafal.kucharski__uj.edu.pl | YES | PyTorch or TensorFlow, optimization, ILP, networkX
Predicting pooled rides in Chicago (dataset of 5 million trips) | supervised learning, graph theory, urban mobility, transport, XAI | In a dataset of 5 million Uber trips in Chicago, some (20%) are pooled, i.e., travellers ride together. Which ones, and why? Can we use this dataset to successfully predict which trips will be pooled and what factors influence this? This paper scratched the surface; let's go deeper: https://doi.org/10.1177/0361198120915886 | Rafał Kucharski | rafal.kucharski__uj.edu.pl | YES | PyTorch or TensorFlow, pandas, XAI
Extending the Continual World Benchmark | continual learning, reinforcement learning, transfer learning | We are looking to extend our Continual World benchmark (https://arxiv.org/abs/2105.10919) in various ways, such as learning from pixels, implementing new RL algorithms, implementing new continual learning methods, and exploring sparse-reward settings. | Maciej Wołczyk | maciej.wolczyk[at]gmail.com | YES | Python, preferably TensorFlow 2
Continual learning with quick remembering | continual learning, transfer learning | In continual learning we want to understand the phenomenon of catastrophic forgetting: a network quickly losing performance on previously learned tasks after encountering new tasks. However, this usually concerns zero-shot forgetting. What happens if we're allowed to quickly recall the old problem before attempting to solve it? The goal of the project is to investigate how quickly we can recall the "forgotten" knowledge and build a CL method optimized for that. | Maciej Wołczyk | maciej.wolczyk[at]gmail.com | YES | PyTorch or JAX
How to handle data shift when fine-tuning RL models? | reinforcement learning, transfer learning, continual learning | Foundation models have delivered impressive outcomes in areas like computer vision and language processing, but not as much in reinforcement learning. It has been demonstrated that fine-tuning on compositional tasks, where certain aspects of the environment may only be revealed after extensive training, is susceptible to catastrophic forgetting. In this situation, a pre-trained model may lose valuable knowledge before encountering parts of the state space that it can handle. The goal of the project is to research and develop methods that could prevent forgetting of the pretrained weights and therefore achieve better performance by leveraging previous knowledge. We highly recommend section 4.4, Minecraft RL. | Maciej Wołczyk, Bartłomiej Cupiał | maciej.wolczyk[at]gmail.com | YES | PyTorch
Diffusion model for video generation | | The goal of the project is to use a pretrained image diffusion model to generate videos; we want to adapt/modify the guided diffusion technique: https://arxiv.org/pdf/2105.05233.pdf. Optimally, we would obtain results comparable with https://research.nvidia.com/labs/toronto-ai/VideoLDM | Jacek Tabor, Łukasz Struski | jacek.tabor[AT]uj.edu.pl, lukasz.struski[AT]uj.edu.pl | YES | PyTorch
Theoretically justified MIL | | (Based on this ECAI paper: https://arxiv.org/pdf/2306.10535.pdf.) We want to create a multiple instance learning model that builds a classifier using an interpretable backbone. Additionally, it will interpretably aggregate information coming from a few different MILs. | Jacek Tabor, Łukasz Struski | jacek.tabor[AT]uj.edu.pl, lukasz.struski[AT]uj.edu.pl | YES | PyTorch
A counterpart of Grad-CAM (https://arxiv.org/abs/1610.02391) | | Interpretation of the last convolutional layer in convolutional models. We want to build a segmentation model that is interpretable analogously to Grad-CAM, but one that carries the information needed for correct classification. | Jacek Tabor, Łukasz Struski | jacek.tabor[AT]uj.edu.pl, lukasz.struski[AT]uj.edu.pl | YES | PyTorch
U-Net + transformer | | Stacking a transformer model on top of something like a U-Net that segments [it would learn to remove the background in an unsupervised way]. | Jacek Tabor, Łukasz Struski | jacek.tabor[AT]uj.edu.pl, lukasz.struski[AT]uj.edu.pl | YES | PyTorch
Prototypes for video | | We would like to build a prototype-based model (https://dl.acm.org/doi/pdf/10.1145/3447548.3467245) that allows video analysis, possibly with segmentation [https://arxiv.org/pdf/2301.12276]. | Jacek Tabor, Łukasz Struski | jacek.tabor[AT]uj.edu.pl, lukasz.struski[AT]uj.edu.pl | YES | PyTorch
Decision trees (non-binary) | | Based on an interpretable backbone [either B-cos or prototypes]. We have a proposed tree model that allows encoding the structure of non-binary trees. | Jacek Tabor, Łukasz Struski | jacek.tabor[AT]uj.edu.pl, lukasz.struski[AT]uj.edu.pl | YES | PyTorch
Contrastive model | | For cardiac ultrasound [the same class contains two different views, or video fragments]. We want the model to build a representation that is independent of the choice of view. | Jacek Tabor, Łukasz Struski | jacek.tabor[AT]uj.edu.pl, lukasz.struski[AT]uj.edu.pl | YES | PyTorch
MIL based on iterative label refinement | | The goal is to build a MIL (multiple instance learning) model that trains memory-efficiently; we want to build on the paper https://proceedings.mlr.press/v202/zhu23l/zhu23l.pdf | Jacek Tabor, Łukasz Struski | jacek.tabor[AT]uj.edu.pl, lukasz.struski[AT]uj.edu.pl | YES | PyTorch
Multi-Agent Reinforcement Learning with PettingZoo and TorchRL | reinforcement learning | The project aims to explore and evaluate the performance of agents in a Multi-Agent Reinforcement Learning (MARL) setup using the Multi Particle Environments (MPE) from the PettingZoo library. The agents will be trained using state-of-the-art reinforcement learning algorithms such as Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO), implemented with the TorchRL library. | Anastasia Psarou, Rafał Kucharski | anastasia.psarou@uj.edu.pl | YES | PyTorch
Multi-Agent Reinforcement Learning for Traffic Signal Control in SUMO-RL | reinforcement learning | This project focuses on leveraging the SUMO-RL framework to train multiple reinforcement learning (RL) agents for optimized traffic signal control in a simulated traffic environment. By utilizing state-of-the-art RL algorithms, the goal is to manage traffic flow efficiently and minimize congestion. | Anastasia Psarou, Rafał Kucharski | anastasia.psarou@uj.edu.pl | YES | Python
Bayesian Neural Networks with count-based priors | bayesian deep learning | The main idea is to utilize metadata from the data as priors for (B)NNs. Based on the assumption that a model should make more confident predictions for better-known objects, we may try to have priors (= distributions with parameters set based on metadata) imposing more concentrated predictive distributions for better-known objects, e.g., biasing predictions based on counts of feature values. For example, a model trained on 1000 rows with country=US and 10 rows with country=PL should provide more certain predictions for US than for PL. Papers: [Being Bayesian about Categorical Probability, ICML 2020], [Can a Confident Prior Replace a Cold Posterior?] | Tomasz Kuśmierczyk | t.kusmierczyk@uj.edu.pl | YES | PyTorch
2-stage Variational Inference for Bayesian Neural Networks | bayesian deep learning | Replicate the approach used in learning with the Laplace approximation for Variational Inference: split learning into two stages, (1) finding the posterior mode and (2) approximating the covariance matrix (e.g., for a linearized model). | Tomasz Kuśmierczyk | t.kusmierczyk@uj.edu.pl | YES | PyTorch
Uncertainty propagation in partially stochastic Bayesian Neural Networks | bayesian deep learning | Recent research, such as the study "Do Bayesian Neural Networks Need to Be Fully Stochastic?", suggests that fully stochastic models may not always be necessary. Partially stochastic models, where only the first or last layer is stochastic, can achieve comparable performance with reduced computational complexity. In partially stochastic models a significant challenge arises: the effective propagation of input and output uncertainty through stochastic layers. Without proper uncertainty propagation, the benefits of stochastic modeling diminish, leading to less reliable predictions. Furthermore, exploring alternative structures within stochastic layers, such as integrating diffusion models conditioned on outputs from previous layers, may offer improved performance and uncertainty estimation. | Tomasz Kuśmierczyk | t.kusmierczyk@uj.edu.pl | YES | PyTorch
Enhancing LoRA-XS Performance with Bayesian Low-Rank Adaptation for Efficient Fine-Tuning | lora, peft, NLP, bayesian deep learning | This project aims to use Bayesian Low-Rank Adaptation (Bayesian LoRA) to improve LoRA-XS performance for fine-tuning large models. By incorporating Bayesian inference techniques, we focus on enhancing model calibration while reducing computational overhead. The project will involve implementing and testing this integrated method on language models, with the goal of producing a more efficient and robust fine-tuning framework. | Klaudia Bałazy, Jacek Tabor | klaudia.balazy@doctoral.uj.edu.pl | YES | PyTorch
LoRA-XS and Adaptive Efficient Fine-Tuning | lora, peft, NLP, adaptation | This project explores the integration of LoRA-XS with AdaLoRA methods to create a more adaptive and efficient framework for fine-tuning large pre-trained models. LoRA-XS significantly reduces the number of parameters through low-rank adaptation, while AdaLoRA dynamically allocates parameter budgets based on the importance of weight matrices. By combining these approaches, the project aims to optimize parameter usage and enhance fine-tuning performance, particularly in resource-constrained environments. | Klaudia Bałazy, Jacek Tabor | klaudia.balazy@doctoral.uj.edu.pl | YES | PyTorch
Logic-based prototypical-parts-derived classification of images | interpretability, prototypical parts, image classification | The project aims to develop a model based on prototypical parts that is able to present an explanation as a logic formula (e.g., concepts a and b are important to classify as C1, but class C2 is predicted when concept c or concept d is present), introducing more precise explanation types. | Dawid Rymarczyk, Łukasz Struski, Jacek Tabor | dawid.rymarczyk@uj.edu.pl | YES | PyTorch
Improve prototypical-parts networks with human feedback | interpretability, prototypical parts, human-computer interaction | This topic relates to the publication "Concept-level Debugging of Part-Prototype Networks" (https://arxiv.org/pdf/2205.15769). For students interested in psychology. | Bartosz Zieliński, Tomasz Michalski, Dawid Rymarczyk | bartosz.zielinski@uj.edu.pl | YES | PyTorch
More accurate attribution maps | explainability, attribution maps | This topic relates to the publication "B-Cos Networks: Alignment Is All We Need for Interpretability" (https://openaccess.thecvf.com/content/CVPR2022/papers/Bohle_B-Cos_Networks_Alignment_Is_All_We_Need_for_Interpretability_CVPR_2022_paper.pdf). For students interested in mathematics. | Bartosz Zieliński, Adam Wróbel | bartosz.zielinski@uj.edu.pl | YES | PyTorch
Interpretable drug discovery | interpretability, graph neural networks, drug discovery | This topic relates to the publication "Contrastive learning of image- and structure-based representations in drug discovery" (https://openreview.net/pdf?id=OdXKRtg1OG). For students interested in chemistry. | Bartosz Zieliński, Tomasz Danel, Dawid Rymarczyk | bartosz.zielinski@uj.edu.pl | YES | PyTorch
Embodied self-supervised learning | self-supervised learning, autonomous mobile robots | This topic relates to the publication "CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion" (https://proceedings.neurips.cc/paper_files/paper/2022/file/16e71d1a24b98a02c17b1be1f634f979-Paper-Conference.pdf). For students interested in robotics. | Bartosz Zieliński, Turhan Can Kargin, Marcin Przewięźlikowski | bartosz.zielinski@uj.edu.pl | YES | PyTorch
Embodied active visual exploration | active visual exploration, autonomous mobile robots | This topic relates to the publication "AdaGlimpse: Active Visual Exploration with Arbitrary Glimpse Position and Scale" (https://arxiv.org/pdf/2404.03482). For students interested in robotics. | Bartosz Zieliński, Adam Pardyl | bartosz.zielinski@uj.edu.pl | YES | PyTorch
Visual model merging using attribute interpolation | model merging, autonomous mobile robots | This topic relates to the publication "An Attribute Interpolation Method in Speech Synthesis by Model Merging" (https://arxiv.org/pdf/2407.00766). For students interested in robotics. | Bartosz Zieliński, Marcin Osial | bartosz.zielinski@uj.edu.pl | YES | PyTorch
Multi-agent reinforcement learning | reinforcement learning, game theory, transport systems, multi-agent systems | You will participate in creating a benchmark for routing games with humans and Connected Autonomous Vehicles in the loop (taking actions with RL algorithms). | Rafał Kucharski, Onur Akman | coexistence@uj.edu.pl | YES | PyTorch

GMUM PROJECTS (Not actively looking for students)

These projects are not actively looking for students. However, if you find a project interesting and would like to know more about it, you may contact the person listed in the "CONTACT" column.

PROJECT NAME | KEYWORDS | DESCRIPTION | PEOPLE | CONTACT | STUDENT NEEDED | REQUIREMENTS/ADDITIONAL INFO
---|---|---|---|---|---|---
New representations for molecules | deep learning, cheminformatics | We would like to find new representations for molecules. We could work both on new embedding methods for molecules and on new input representations for graph neural networks. | Łukasz Maziarka, Tomasz Danel, Agnieszka Pocha | lukasz.maziarka@student.uj.edu.pl | NO | Python, PyTorch, TensorFlow
Continual Learning with Experience Replay | deep learning, continual learning, experience replay | Catastrophic forgetting occurs in neural networks: when training on a new task, the model completely forgets what it has learned on previous tasks. One of the most efficient ways of combating catastrophic forgetting is experience replay, i.e., retraining on a small set of examples from previous tasks. Though promising, this approach has not been properly explored. This project aims to understand and improve experience replay methods for continual learning. | Maciej Wołczyk, Marek Śmieja, Jacek Tabor | maciej.wolczyk[AT]gmail.com | NO | PyTorch basics
Conditional Computation for Efficient Inference | deep learning, model compression, conditional computation | The human brain can adaptively change the amount of resources used for the current task. However, neural networks always use all their available resources for every example. This is not only inconsistent with the biological perspective but also highly inefficient. We work on an approach that uses fewer resources (layers, neurons) for easy examples and all available resources for difficult examples. | Maciej Wołczyk, Bartosz Wójcik, Marek Śmieja, Jacek Tabor | maciej.wolczyk[AT]gmail.com | NO | PyTorch basics
Hypernetworks Knowledge Distillation | deep learning, teacher-student, computer vision, super-resolution | We use two hypernetworks in a teacher-student manner to solve the super-resolution task. | Maciej Wołczyk, Szymon Rams, Tomasz Danel, Łukasz Maziarka | lukasz.maziarka@student.uj.edu.pl | NO | 
Aspect-Level Sentiment Classification | Natural Language Processing, Sentiment Classification, Attention Modeling, Deep Learning | Aspect-level sentiment classification aims to identify the sentiment expressed towards some aspects given context sentences. Recently, Hu et al. proposed CAN (https://arxiv.org/pdf/1812.10735.pdf). However, such a mechanism suffers from a major drawback: it seems to overly focus on a few frequent words with sentiment polarities, and little attention is paid to low-frequency ones. Our potential solution to this issue is supervised attention. | Magdalena Wiercioch | mgkwiercioch[AT]gmail.com | NO | 
Molecule Representation for Predicting Drug-Target Interaction | Deep Learning, Representation Learning, Cheminformatics | An essential part of the drug discovery process is predicting drug-target interactions. However, the process is expensive in terms of both time and money. A precisely learned molecule representation in a drug-target interaction model could contribute to developing personalized medicine, which will help many patient cohorts. We want to propose a few molecule representations based on various concepts, including but not limited to deep neural networks. | Magdalena Wiercioch | mgkwiercioch[AT]gmail.com | NO | 
Deep learning for molecular design | Deep Learning, Cheminformatics, Molecular Design | Searching for new molecules in areas such as drug discovery usually starts from the core structures of candidate molecules to optimize the properties of interest. Our present work proposes a graph recurrent generative model for molecular structures. The model incorporates side information into a recurrent neural network. | Magdalena Wiercioch | mgkwiercioch[AT]gmail.com | NO | 
Optimization in deep policy gradient methods | Deep Learning, Reinforcement Learning, Optimization | Deep policy gradient methods, currently among the most used tools of reinforcement learning researchers, have some non-obvious optimization properties. We investigate questions such as: why is PPO more efficient than TRPO, how important are the various tricks used when implementing PPO, and how can we improve the sample efficiency of these methods? | Maciej Wołczyk | maciej.wolczyk[AT]gmail.com | NO | 
Optimization in neural networks without backpropagation and gradients | Deep Learning, Optimization, Bio-inspired | Neuroscientific studies of the mechanisms of learning in the brain suggest that backpropagation (and especially backpropagation through time, as in RNNs) may not be a viable method of learning in neural structures. We want to explore other, more biologically justified approaches to this problem. | Jacek Tabor, Aleksandra Nowak, Maciej Wołczyk | maciej.wolczyk[AT]gmail.com | NO | 
Fidelity-Weighted Learning | neural networks, deep neural networks, learning methods | Fidelity-weighted learning is a student-teacher method for learning from labels of varying quality. | Krzysztof Misztal, Agnieszka Pocha | krzysztof.misztal@uj.edu.pl | NO | 
Neural networks adapting to datasets: learning network size and topology | deep learning, network pruning, neural architectures | We introduce a flexible setup allowing a neural network to learn both its size and topology during the course of standard gradient-based training. The resulting network has the structure of a graph tailored to the particular learning task and dataset. The obtained networks can also be trained from scratch and achieve virtually identical performance. We explore the properties of the network architectures for a number of datasets of varying difficulty, observing systematic regularities. | Romuald Janik, Aleksandra Nowak | aleksandrairena.nowak[AT]doctoral.uj.edu.pl | NO | https://arxiv.org/abs/2006.12195
Relationship between disentanglement and multi-task learning | deep learning, disentanglement learning, multi-task learning, hard parameter sharing | One of the main arguments for studying disentangled representations is the assumption that they can be easily reused in different tasks. At the same time, finding a joint, adaptable representation of data is one of the key challenges in many multi-task learning settings. The aim of the project is to take a closer look at the relationship between disentanglement and multi-task learning. | Łukasz Maziarka, Aleksandra Nowak, Andrzej Bedychaj, Maciej Wołczyk | aleksandrairena.nowak[AT]doctoral.uj.edu.pl | NO | 
Explaining metabolic stability (pilot study) | cheminformatics, explainability, web service | Metabolic stability is one of several molecular properties optimized in drug design pipelines. It is connected with the duration of the desirable therapeutic effect of the drug. The exact mechanisms of drug metabolism are yet to be discovered. Explaining predictions of machine learning models can give us ideas about which chemical structures are important. We plan to publish a pilot study in the International Journal of Molecular Sciences (IF: 4.56, number of ministerial points: 30). The planned submission date is by the end of the year. | Agnieszka Pocha, Sabina Podlewska | agnieszka.pocha[at]doctoral.uj.edu.pl | NO | The AI and explainability parts are mostly done. We need a student to build an interactive webpage which will present the existing explanations and generate new ones (using the provided Python API) for molecules uploaded by users. We require knowledge of designing and building webpages, as well as standard technologies including HTML, CSS and JavaScript.
Semi-supervised siamese neural network | siamese neural network, semi-supervised learning | Siamese networks are used to label new examples when we have a large number of classes: https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf. We focus on designing semi-supervised versions of siamese networks, where we use only a small number of labeled examples. An example of such a method: https://www.ijcai.org/proceedings/2017/0358.pdf | Marek Śmieja | marek.smieja[AT]ii.uj.edu.pl | NO | 
Clustering with pairwise constraints | clustering, pairwise constraints (must-link, cannot-link), semi-supervised learning | Clustering is an ill-posed problem. Making use of a small amount of labeled data, we can specify what we mean by similarity. This is the area of semi-supervised clustering. We will focus on constructing discriminative clustering models that take the information about labeled data into account. | Marek Śmieja | marek.smieja[AT]ii.uj.edu.pl | NO | 
Generative model in the multi-label case | generative models, flow models, disentanglement, multi-label classification | We construct a semi-supervised generative model for partially labeled data. More precisely, every example can be labeled using many binary attributes, but we only have access to a few labels. Such a generative model should allow for generating new examples with desired properties (labels). | Marek Śmieja, Maciej Wołczyk, Łukasz Maziarka | marek.smieja[AT]ii.uj.edu.pl | NO | 
Multi-output regression for object tracking | generative models, image processing, object detection, hypernetworks, clustering, regression | We consider the problem of predicting object position. As the future is uncertain to a large extent, modeling the uncertainty and multimodality of the future states is of great relevance. For this purpose we use a generative model that takes the multimodality and uncertainty into account. The aim of the project is to compare with https://openaccess.thecvf.com/content_CVPR_2019/papers/Makansi_Overcoming_Limitations_of_Mixture_Density_Networks_A_Sampling_and_Fitting_CVPR_2019_paper.pdf | Marek Śmieja, Jacek Tabor, Przemysław Spurek | marek.smieja[AT]ii.uj.edu.pl | NO | 
Auto-encoder with discrete latent space | auto-encoder, generative models, discrete variables, reparametrization trick, importance sampling | Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this project we want to build an auto-encoder model with a discrete latent space; see https://arxiv.org/pdf/1611.01144.pdf (a minimal sketch of this trick appears after the project tables). | Marek Śmieja, Jacek Tabor, Łukasz Struski, Klaudia Bałazy | marek.smieja[AT]ii.uj.edu.pl | NO | 
Learning neural networks from missing data | missing data, convolutional neural networks | The project concerns the problem of training convolutional neural networks directly on missing data. We plan to extend the following model: https://papers.nips.cc/paper/7537-processing-of-missing-data-by-neural-networks.pdf | Marek Śmieja, Łukasz Struski, Jacek Tabor | marek.smieja[AT]ii.uj.edu.pl | NO | 
Pharmacophoric Autoencoder | deep learning, cheminformatics, autoencoders | The plan is to create an autoencoder that works with molecular graphs embedded in 3D space. Such a generative model should be able to generate compounds with pre-defined 3D constraints. As input it takes 3D molecular graphs and pharmacophoric features represented as 3D points. The 3D positions are generated by molecular docking software. The pharmacophoric (3D) constraints are given in the latent space of this model. | Tomasz Danel, Łukasz Maziarka, Bartosz Podkanowicz, Artur Kasymov, Sabina Podlewska, Marek Śmieja, Igor Podolak | tomasz.danel[AT]ii.uj.edu.pl | NO | PyTorch
Non-Gaussian Gaussian Processes | deep learning, meta-learning, few-shot learning, regression, gaussian processes, normalizing flows | Gaussian Processes (GPs) have been widely used in machine learning to model distributions over functions, with applications including multi-modal regression, time-series prediction, and few-shot learning. GPs are particularly useful in the last application since they rely on Normal distributions and hence enable closed-form computation of the posterior probability function. Unfortunately, because the resulting posterior is not flexible enough to capture complex distributions, GPs assume high similarity between subsequent tasks, a requirement rarely met in real-world conditions. In this work, we address this limitation by leveraging the flexibility of Normalizing Flows to modulate the posterior predictive distribution of the GP, which makes the GP posterior locally non-Gaussian. | Marcin Sendera, Jacek Tabor, Aleksandra Nowak, Andrzej Bedychaj, Massimiliano Patacchiola, Tomasz Trzciński, Przemysław Spurek, Maciej Zięba | marcin.sendera[AT]doctoral.uj.edu.pl | NO | PyTorch
Better Knowledge Transfer for Non-Gaussian Gaussian Processes (Continuous Object Tracking) | deep learning, meta-learning, few-shot learning, object tracking, gaussian processes, normalizing flows | This project is based on the previous Non-Gaussian Gaussian Processes (NGGP) work. We are going to enhance the NGGPs' flexibility by introducing the full information coming from the support data in the few-shot learning setting. The main aim is to enable much faster learning and knowledge transfer from task to task. We also want to apply such a solution to the continuous object tracking problem, specifically to generate the probability density function of the object position over time, assuming knowledge of a discrete set of past positions. | Marcin Sendera, Jacek Tabor, Massimiliano Patacchiola, Przemysław Spurek, Maciej Zięba, Rafał Nowak | marcin.sendera[AT]doctoral.uj.edu.pl | NO | PyTorch
Normalizing Flows in Anomaly Detection | deep learning, generative models, normalizing flows, anomaly detection | Anomaly detection is the problem of distinguishing abnormal or novel data from normal data. We propose to utilize the flexibility of Normalizing Flow models, which can be treated as bijections from the original space to a new latent space. This property allows for explicit algebraic expressions for the points' density in the latent space. We utilize various forms of objective functions combined with different flow models to discover anomalies. | Marcin Sendera, Jacek Tabor | marcin.sendera[AT]doctoral.uj.edu.pl | NO | PyTorch
Mol2Image Translation: Generation of High-Content Images Based on Chemical Structures | convolutional neural networks, computer vision, cheminformatics | High-Content Screening is a technology that accelerates drug discovery pipelines by providing a fast method for screening vast numbers of chemical compounds and analysing the output images from fluorescence microscopy (Google "high-content screening"; the images are quite cool). In this project we want to create a generative model that transforms chemical structures (SMILES strings or molecular graphs) into images representing phenotypical changes in cellular systems. | Tomasz Danel, Adriana Borowa | tomasz.danel[AT]ii.uj.edu.pl | NO | PyTorch or TensorFlow
Interpretable Uncertainty in Molecular Data | cheminformatics, machine learning, uncertainty | In drug discovery pipelines, it is important to accurately predict molecular properties for yet unseen chemical compounds. Of course, the quality of predictions decreases as we depart from the known chemical space. There are some methods for assessing the uncertainty of predictions for out-of-domain data, e.g., conformal prediction, but they do not provide explanations of why the predictions were marked as uncertain. We plan to create an interpretable uncertainty estimator that indicates "strange" chemical fragments (different from the training set) in the compound. | Tomasz Danel, Anna Bielawska | tomasz.danel[AT]ii.uj.edu.pl | NO | ML basics (scikit-learn, numpy); a basic understanding of chemistry is a plus (solid high-school level)
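
For the "Augmenting SGD optimizers with low-dimensional 2nd-order information" project above, the sketch below illustrates one possible reading of the online parabola idea: per-coordinate exponential moving averages of parameters and gradients yield a regression slope of gradient versus parameter, i.e., an estimated curvature, which in turn suggests a Newton-like step. The `OnlineParabola` class, the `beta` smoothing constant, and the SGD fallback are our own assumptions, not the author's implementation; see the linked Wolfram Community post for the actual method.

```python
import torch

class OnlineParabola:
    """Toy per-coordinate curvature estimate from parameter/gradient trends."""
    def __init__(self, param, beta=0.9, lr_fallback=0.01, eps=1e-8):
        self.beta, self.lr_fallback, self.eps = beta, lr_fallback, eps
        # Exponential moving averages of theta, g, theta*g and theta^2.
        self.m_t = torch.zeros_like(param)
        self.m_g = torch.zeros_like(param)
        self.m_tg = torch.zeros_like(param)
        self.m_tt = torch.zeros_like(param)

    @torch.no_grad()
    def step(self, param, grad):
        b = self.beta
        self.m_t.mul_(b).add_(param, alpha=1 - b)
        self.m_g.mul_(b).add_(grad, alpha=1 - b)
        self.m_tg.mul_(b).add_(param * grad, alpha=1 - b)
        self.m_tt.mul_(b).add_(param * param, alpha=1 - b)
        # Regression slope of g vs. theta = curvature of a local parabola.
        cov = self.m_tg - self.m_t * self.m_g
        var = self.m_tt - self.m_t * self.m_t
        lam = cov / (var + self.eps)
        # Newton-like step where curvature looks positive, plain SGD otherwise.
        newton = grad / lam.clamp(min=self.eps)
        param.sub_(torch.where(lam > 0, newton, self.lr_fallback * grad))

# Usage on a 1D quadratic f(x) = (x - 3)^2, whose true curvature is 2:
x = torch.tensor([0.0], requires_grad=True)
opt = OnlineParabola(x)
for _ in range(100):
    x.grad = None
    loss = ((x - 3) ** 2).sum()
    loss.backward()
    opt.step(x.data, x.grad)
```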
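
For the early-exit entries ("Early exit for visual transformer" and the conditional-computation benchmark), here is a minimal sketch of the basic mechanism; the architecture, the sizes, and the confidence threshold are arbitrary choices of ours, not the projects' code. Intermediate classifier heads are trained jointly, and at inference an input leaves through the first head whose softmax confidence exceeds the threshold.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitMLP(nn.Module):
    """Toy early-exit network: one classifier head after every block."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10, threshold=0.9):
        super().__init__()
        dims = [in_dim, hidden, hidden, hidden]
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())
            for d_in, d_out in zip(dims[:-1], dims[1:])
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, n_classes) for _ in self.blocks)
        self.threshold = threshold  # confidence required to exit early

    def forward(self, x):
        # Training: return the logits of every exit so each head is supervised.
        logits = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            logits.append(head(x))
        return logits

    @torch.no_grad()
    def infer(self, x):
        # Inference on a single example: leave at the first confident exit.
        for exit_idx, (block, head) in enumerate(zip(self.blocks, self.heads)):
            x = block(x)
            out = head(x)
            if out.softmax(dim=-1).max() >= self.threshold:
                break
        return out, exit_idx  # deepest exit if no head was confident

model = EarlyExitMLP()
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
# Joint training loss: sum of the cross-entropies over all exits.
loss = sum(F.cross_entropy(out, y) for out in model(x))
loss.backward()
```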
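
For the "Auto-encoder with discrete latent space" project, the difficulty it names (backpropagating through categorical samples) is commonly handled with the Gumbel-softmax reparametrization from the cited paper (https://arxiv.org/pdf/1611.01144.pdf). Below is a minimal sketch; the encoder/decoder architecture and all sizes are invented by us for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteLatentAE(nn.Module):
    """Toy auto-encoder whose latent code is a one-hot categorical variable."""
    def __init__(self, in_dim=784, hidden=256, n_categories=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_categories)
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_categories, hidden), nn.ReLU(), nn.Linear(hidden, in_dim)
        )

    def forward(self, x, tau=1.0):
        logits = self.encoder(x)
        # Differentiable sample: Gumbel noise + temperature-tau softmax.
        # hard=True outputs a one-hot vector while keeping the soft gradient
        # (straight-through estimator), so backpropagation works.
        z = F.gumbel_softmax(logits, tau=tau, hard=True)
        return self.decoder(z)

model = DiscreteLatentAE()
x = torch.rand(32, 784)
loss = F.mse_loss(model(x), x)
loss.backward()  # gradients flow through the relaxed categorical sample
```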