GMUM PROJECTS
Each entry below lists: PROJECT NAME, KEYWORDS, DESCRIPTION, PEOPLE, CONTACT, STUDENT NEEDED, and REQUIREMENTS/ADDITIONAL INFO.

PROJECT NAME: Diffusion models
KEYWORDS: diffusion models
DESCRIPTION: Two interesting projects: 1) implicit representation of diffusion, where the goal is to shrink the architecture so that it runs on ordinary GPUs; 2) diffusion on network weights.
PEOPLE: Przemysław Spurek
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Continual learning
KEYWORDS: continual learning, meta-learning
DESCRIPTION: A project under development, based on using hypernetworks for the continual learning task.
PEOPLE: Przemysław Spurek
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Video Generation
KEYWORDS: generative models
DESCRIPTION: The project is based on representing audio with neural networks.
PEOPLE: Przemysław Spurek
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Autoregressive/diffusion-based Normalizing Flows
KEYWORDS: Normalizing Flows, Generative Models, Diffusion Models, Autoregressive Models
DESCRIPTION: The aim of this project is to create new Normalizing Flow models in the style of diffusion or autoregressive models.
PEOPLE: Marcin Sendera
CONTACT: marcin.sendera[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Continual Few-Shot Learning
KEYWORDS: Continual Learning, Meta-Learning, Few-Shot Learning
DESCRIPTION: The aim of this project is to attack the problem of continual few-shot learning.
PEOPLE: Marcin Sendera
CONTACT: marcin.sendera[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Normalizing Flows in Meta-Learning
KEYWORDS: Normalizing Flows, Generative Models, Meta-Learning, Few-Shot Learning
DESCRIPTION: The aim of this project is to utilize Normalizing Flows and other generative models in architectures used for very large meta-learning datasets.
PEOPLE: Marcin Sendera
CONTACT: marcin.sendera[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch, TensorFlow

PROJECT NAME: Extended Gaussian Processes in Meta-Learning (few-shot regression)
KEYWORDS: Normalizing Flows, Generative Models, Gaussian Processes, Meta-Learning, Few-Shot Learning
DESCRIPTION: The aim of this project is to extend the Non-Gaussian Gaussian Processes framework towards greater flexibility (e.g., adding a conditional case based on support-set data).
PEOPLE: Marcin Sendera, Tomasz Kuśmierczyk
CONTACT: marcin.sendera[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Few-shot learning (with hypernetworks)
KEYWORDS: few-shot learning, meta-learning
DESCRIPTION: Few-shot learning (FSL), also referred to as low-shot learning (LSL) in some sources, is a type of machine learning problem where the training dataset contains limited information. Few-shot learning aims for deep learning models to predict the correct class of instances when only a small number of examples are available in the training dataset.
PEOPLE: Przemysław Spurek
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Meta-learning (continual learning + few-shot)
KEYWORDS: few-shot learning, meta-learning
DESCRIPTION: Our goal is to verify the MAML algorithm in the continual learning setting.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Bayesian Continual Learning
KEYWORDS: continual learning, Bayesian learning, optimization
DESCRIPTION: Recently my academic focus has been on continual learning. I am interested in Bayesian learning and optimization. Any reasonable combination (potentially with generative modelling) would be a fantastic project for me. If you don't have any particular ideas, I do have something to offer!
PEOPLE: Mateusz Pyla
CONTACT: mateusz.pyla[AT]doctoral.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: A strong maths background is nice to have. The student needs to know how to program in either TensorFlow or PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: GAN + NeRF
KEYWORDS: generative models for images, hypernetworks
DESCRIPTION: In this project we will use GANs for generating NeRF representations (NeRF: https://www.matthewtancik.com/nerf).
PEOPLE: Przemysław Spurek
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: NeRF for modeling 3D faces
KEYWORDS: generative models for images, hypernetworks
DESCRIPTION: In this project we will use GANs for generating NeRF representations (NeRF: https://www.matthewtancik.com/nerf).
PEOPLE: Przemysław Spurek
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Early exit for visual transformer
KEYWORDS: early exit, visual transformer
PEOPLE: Jacek Tabor, Klaudia Balazy
CONTACT: jacek.tabor[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Contrastive learning with the use of memorization
KEYWORDS: contrastive learning
DESCRIPTION: The goal of this work is to create a new approach to contrastive learning in which we train a network to memorize random labels.
PEOPLE: Jacek Tabor, Marek Śmieja
CONTACT: jacek.tabor[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Differentiable splitting of a batch
KEYWORDS: early exit
DESCRIPTION: The goal is to implement, potentially in CUDA, a function that allows convenient splitting of a batch in a differentiable way; we see the main application in early-exit models.
PEOPLE: Jacek Tabor, Klaudia Balazy
CONTACT: jacek.tabor[AT]uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch, CUDA

PROJECT NAME: Ensemble learning
KEYWORDS: ensembles, hypernetworks
DESCRIPTION: The goal of the project is to build a single network (using the hypernetwork concept) that can generate diverse networks for solving a single task.
PEOPLE: Jacek Tabor, Przemysław Spurek
CONTACT: jacek.tabor[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Early exits in Reinforcement Learning
KEYWORDS: conditional computation
DESCRIPTION: Our objective is to extend our previous work on early-exiting models to the reinforcement learning domain.
PEOPLE: Bartosz Wójcik
CONTACT: bartwojc[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Generating 3D objects with NeRF
KEYWORDS: deep neural networks
DESCRIPTION: The goal of the project is to generate high-quality models of human faces with the NeRF algorithm.
PEOPLE: Przemysław Spurek
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Analyzing 3D point clouds
KEYWORDS: deep neural networks
DESCRIPTION: The project involves creating generative models dedicated to 3D objects.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Augmenting SGD optimizers with low-dimensional 2nd-order information
KEYWORDS: SGD optimization
DESCRIPTION: SGD optimization is currently dominated by 1st-order methods like Adam. Augmenting them with 2nd-order information would, for example, suggest an optimal step size. Such an online parabola model can be maintained nearly for free by extracting linear trends from the gradient sequence (arXiv: 1907.07063), and the plan is to include it in standard methods like Adam to improve them.
PEOPLE: Jarek Duda
CONTACT: jaroslaw.duda[at]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis is preferred.

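To make the idea concrete, here is a minimal, illustrative sketch (our simplification, not the exact algorithm of arXiv:1907.07063): along a fixed direction d, the linear trend of the directional derivative g = <grad, d> against the position s = <theta, d> estimates the curvature lambda of a local parabola, so g/lambda suggests a 1D Newton step along d. The class and variable names are hypothetical.

```python
import torch

# Hedged sketch: exponential moving averages of position s = <theta, d> and
# directional derivative g = <grad, d>; the regression slope
# lambda = cov(s, g) / var(s) estimates the curvature of a parabola along d.

class OnlineParabola:
    def __init__(self, beta=0.9):
        self.beta = beta
        self.m_s = self.m_g = self.m_ss = self.m_sg = 0.0

    def update(self, s, g):
        b = self.beta
        self.m_s  = b * self.m_s  + (1 - b) * s
        self.m_g  = b * self.m_g  + (1 - b) * g
        self.m_ss = b * self.m_ss + (1 - b) * s * s
        self.m_sg = b * self.m_sg + (1 - b) * s * g

    def curvature(self):
        var = self.m_ss - self.m_s ** 2
        cov = self.m_sg - self.m_s * self.m_g
        return cov / (var + 1e-12)

# Toy usage on f(theta) = 0.5 * theta^T A theta with a fixed direction d.
A = torch.diag(torch.tensor([1.0, 10.0]))
theta = torch.tensor([5.0, 5.0], requires_grad=True)
d = torch.tensor([0.0, 1.0])
parabola = OnlineParabola()
for _ in range(50):
    loss = 0.5 * theta @ A @ theta
    grad, = torch.autograd.grad(loss, theta)
    parabola.update((theta @ d).item(), (grad @ d).item())
    with torch.no_grad():
        theta -= 0.05 * grad  # plain SGD; the parabola model rides along for free
print(parabola.curvature())  # approaches 10.0, suggesting step size 1/10 along d
```
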
PROJECT NAME: Hierarchical correlation reconstruction
KEYWORDS: modelling joint distributions, non-stationarity
DESCRIPTION: HCR combines the advantages of machine learning and statistics: at very low cost it offers an MSE-optimal model of the joint distribution of multiple variables as a polynomial, by decomposing statistical dependencies into (interpretable) mixed moments. It allows extracting and exploiting very weak statistical dependencies that are not accessible to other methods like KDE. It can also model their time evolution for non-stationary time series, e.g. in financial data. This project develops the method and searches for its further applications (slides: https://www.dropbox.com/s/7u6f2zpreph6j8o/rapid.pdf).
PEOPLE: Jarek Duda
CONTACT: jaroslaw.duda[at]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: Experience in mathematics and statistics is preferred.

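For intuition, a minimal two-variable sketch of the coefficient-estimation step, assuming both variables are already normalized to [0,1] (e.g., by the empirical CDF); the basis degree and the toy data below are arbitrary illustrative choices, not part of the project.

```python
import numpy as np

# Two-variable sketch: model the joint density as
#   rho(x, y) = sum_{j,k} a_jk f_j(x) f_k(y)
# in an orthonormal (rescaled Legendre) basis on [0,1]; the MSE-optimal
# coefficients a_jk are averages of products of basis functions: mixed moments.

def legendre_basis(u, degree=3):
    t = 2 * u - 1   # first orthonormal Legendre polynomials on [0, 1]
    fs = [np.ones_like(u),
          np.sqrt(3) * t,
          np.sqrt(5) * (3 * t**2 - 1) / 2,
          np.sqrt(7) * (5 * t**3 - 3 * t) / 2]
    return np.stack(fs[: degree + 1])              # shape (degree+1, n)

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(size=n)
y = np.clip(x + 0.1 * rng.normal(size=n), 0, 1)    # weakly dependent toy pair

fx, fy = legendre_basis(x), legendre_basis(y)
a = fx @ fy.T / n       # a[j, k] = mean of f_j(x) f_k(y), i.e. the mixed moments
print(np.round(a, 2))   # a[0, 0] = 1; nonzero off-diagonal entries expose dependencies
```
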
PROJECT NAME: Conditional generative models
KEYWORDS: generative models, multi-label learning, learning with partial labels
DESCRIPTION: The goal is to enable control over the object-generation process in models such as StyleGAN (and others). Solution realized so far: https://ojs.aaai.org/index.php/AAAI/article/view/20843. Besides developing such models, we want to apply them, e.g., to generating counterfactual examples, which make it possible to explain the predictions of ML models.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Hierarchical methods
KEYWORDS: unsupervised learning, self-supervised learning, hierarchical clustering
DESCRIPTION: Neural networks achieve very good results in typical classification or clustering tasks, but the interpretability of these results is limited. In this project we want to focus on constructing deep models that make predictions by taking a sequence of decisions. Intuitively, we will work on models that build a decision tree/graph, which in particular allows for better interpretation of the results. Model realized so far: https://arxiv.org/abs/2107.13214
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Learning from tabular data
KEYWORDS: tabular data, hypernetworks, ensemble learning
DESCRIPTION: Neural networks achieve very good results in popular domains such as images or text. For tabular data, which lacks local structure, shallow methods such as random forests or XGBoost often achieve better results. The goal is to develop neural network models that can be successfully applied to tabular data.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Contrastive self-supervised learning
KEYWORDS: self-supervised learning, data augmentation
DESCRIPTION: Self-supervised learning models make it possible to build data representations in an unsupervised way, which can later be successfully used for classification or clustering. Much, however, depends on the augmentations used: if an augmentation changes the class, it is hard to later use such a representation for classification. In this project we want to build models that produce representations less sensitive to the type of augmentations used.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Molecular generative models
KEYWORDS: chemical molecules, generative models
DESCRIPTION: In this project we want to build models for generating chemical molecules. In particular, we want the generated molecules to satisfy given conditions such as activity, solubility, number of rings, etc.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Segmentation learning with regions of confidence
KEYWORDS: deep neural networks, learning methods
DESCRIPTION: What if not all objects in a single sample are labeled? Can we develop a method for learning deep models in such a case?
PEOPLE: Krzysztof Misztal
CONTACT: krzysztof.misztal@uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis is preferred.

PROJECT NAME: Generative Models in Drug Design
KEYWORDS: deep learning, cheminformatics, generative models
DESCRIPTION: We want to find a way to generate chemical molecules that is useful from the perspective of the drug design process. Primarily, the new generative model should be able to follow given structural constraints and generate structural analogs, i.e. molecules similar to previously seen promising compounds.
PEOPLE: Tomasz Danel, Łukasz Maziarka
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch and TensorFlow

PROJECT NAME: Adaptations of travel behaviour in an agent-based urban mobility model
KEYWORDS: agent-based, reinforcement learning, two-sided mobility, urban mobility
DESCRIPTION: You will simulate a two-sided urban mobility market (like Uber or Lyft), where agents get rewarded for their actions. In particular, travellers can decide among platforms (Uber or Lyft) or opt out (use public transport) based on previous experiences. They, however, need to learn which actions are (subjectively) optimal for them. You will use https://github.com/RafalKucharskiPK/MaaSSim and apply decision modules to the agents.
PEOPLE: Rafał Kucharski
CONTACT: rafal.kucharski__uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow

PROJECT NAME: Distributed learning for CAVs in a two-sided mobility market
KEYWORDS: agent-based, reinforcement learning, two-sided mobility, urban mobility
DESCRIPTION: You will simulate a two-sided urban mobility market (like Uber or Lyft), where agents get rewarded for their actions. In particular, drivers (or Connected Autonomous Vehicles, CAVs) can reposition and wait for requests in different parts of the city. They, however, need to learn when and where it is efficient to reposition. You will use https://github.com/RafalKucharskiPK/MaaSSim and apply decision modules to the agents.
PEOPLE: Rafał Kucharski
CONTACT: rafal.kucharski__uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow

PROJECT NAME: A benchmark for comparing early-exiting and conditional computation methods and models
KEYWORDS: conditional computation, pruning, computationally efficient deep models
DESCRIPTION: The project aims to create a unified benchmark for multiple methods that reduce the inference time of deep learning models. We begin by focusing on early-exiting methods. Your task will be to reimplement a conditional computation method from a selected published paper in our common codebase. Conditional computation methods are usually simple to implement and provide significant computational cost savings. We intend to publish the benchmark with the accompanying analysis as a paper at a rank A* conference. We have tasks appropriate for both beginners and people with experience.
PEOPLE: Bartosz Wójcik, Maciej Wołczyk
CONTACT: bartwojc[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in PyTorch (in order to understand the existing code and implement new methods).

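For orientation, an early-exit model in the spirit of this project line might look like the toy sketch below (a generic illustration, not the project's codebase): an internal classifier head after each block, with inference leaving through the first head whose softmax confidence clears a threshold.

```python
import torch
import torch.nn as nn

# Generic early-exit sketch (hypothetical architecture and names).
class EarlyExitNet(nn.Module):
    def __init__(self, dim=64, num_blocks=4, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_blocks)]
        )
        self.heads = nn.ModuleList([nn.Linear(dim, num_classes) for _ in range(num_blocks)])

    def forward(self, x):
        logits = []                      # training: collect every head's logits
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            logits.append(head(x))
        return logits

    @torch.no_grad()
    def predict(self, x, threshold=0.9):
        # Inference for a single example: exit as soon as we are confident.
        for i, (block, head) in enumerate(zip(self.blocks, self.heads)):
            x = block(x)
            conf, pred = head(x).softmax(dim=-1).max(dim=-1)
            if conf.item() >= threshold or i == len(self.blocks) - 1:
                return pred, i           # prediction and the exit index actually used

net = EarlyExitNet()
pred, exit_idx = net.predict(torch.randn(64))  # easy samples exit early
```
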
PROJECT NAME: Ride-pooling heuristics: combinatorial explosion and supervised learning
KEYWORDS: supervised learning, graph theory, urban mobility, transport
DESCRIPTION: You will apply a ride-pooling algorithm which pools travellers (e.g. of Uber) into attractive groups. You will use ExMAS (https://github.com/RafalKucharskiPK/ExMAS), which provides an exact analytical search in the combinatorially exploding search space (e.g. for 1000 trip requests the number of possible up-to-5-person groups is astronomical). You will use these analytical results to train supervised machine learning models and explore ways to make the search space searchable.
PEOPLE: Rafał Kucharski
CONTACT: rafal.kucharski__uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow, optimization, ILP, networkX

PROJECT NAME: Predicting pooled rides in Chicago (dataset of 5 million trips)
KEYWORDS: supervised learning, graph theory, urban mobility, transport, XAI
DESCRIPTION: In the dataset of 5 million Uber trips made in Chicago, some (20%) are pooled, i.e. travelled together. Which ones, and why? Can we use this dataset to successfully predict which of them will be pooled and what factors influence it? This paper scratched the surface; let's go deeper: https://doi.org/10.1177/0361198120915886
PEOPLE: Rafał Kucharski
CONTACT: rafal.kucharski__uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow, pandas, XAI

PROJECT NAME: Model compression in Transformer-based Language Models
KEYWORDS: NLP, model compression, transformers
DESCRIPTION: The goal of this research is to propose a complete methodology for compressing large language models based on the Transformer architecture.
PEOPLE: Klaudia Bałazy
CONTACT: klaudia.balazy[at]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Dynamic computations in NLP models
KEYWORDS: conditional computation, early exit, NLP, transformer
DESCRIPTION: Transformers are the foundation of many well-performing natural language processing models. Unfortunately, they require a lot of computational resources, which results in slow inference. In this project we aim to leverage conditional computation methods to speed up inference along three axes: depth-wise sparsity (early exits), width-wise sparsity (mixture of experts) and input-wise sparsity (dynamic sequence pruning). Additionally, we would like to examine the hypothesis that some data points are easier for neural networks to process. For that purpose, among others, we would like to implement a dynamic variant of mixture of experts (MoE) that enables MoE layers to use fewer resources for easy data points, and compare it with difficulty ratings extracted from early-exit models.
PEOPLE: Klaudia Bałazy
CONTACT: klaudia.balazy[at]doctoral.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Convolutional Mixture of Experts
KEYWORDS: conditional computation, efficient neural networks
DESCRIPTION: Mixture-of-experts models are currently very popular in Transformer-based models. This work intends to test whether the benefits of MoE layers can be transferred to other architecture types, such as convolutional networks. We have tasks appropriate for both beginners and people with experience.
PEOPLE: Bartosz Wójcik
CONTACT: bartwojc[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Early exiting while training
KEYWORDS: conditional computation, efficient neural networks
DESCRIPTION: We want to extend our work on early-exiting to also accelerate the training process.
PEOPLE: Bartosz Wójcik
CONTACT: bartwojc[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: HuggingMolecules
KEYWORDS: molecular property prediction, Transformer, open-source
DESCRIPTION: An open-source library for transformer-based molecular property prediction with a simple and unified API that provides implementations of several state-of-the-art transformers for molecular property prediction. The library is in the development stage, and there are many interesting things to be implemented: novel transformer-based models, pre-training methods, integration with the huggingface caching system, Continuous Integration, and a few other things. The complexity of the tasks is diverse, ranging from "good first issue" to "game-changer", so basically anyone can find something suitable :)
PEOPLE: Piotr Gaiński, Łukasz Maziarka, Tomasz Danel and Stanisław Jastrzębski
CONTACT: piotr.gainski[at]student.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Extending the Continual World Benchmark
KEYWORDS: continual learning, reinforcement learning, transfer learning
DESCRIPTION: We are looking to extend our Continual World benchmark (https://arxiv.org/abs/2105.10919) in various ways, such as learning from pixels, implementing new RL algorithms, implementing new continual learning methods, and exploring the sparse-rewards setting.
PEOPLE: Maciej Wołczyk
CONTACT: maciej.wolczyk[at]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: Python, preferably TensorFlow 2

PROJECT NAME: NLP: Non-deterministic representation of words using Gaussian distributions
KEYWORDS: NLP, word representations
PEOPLE: Jacek Tabor, Przemysław Spurek, Klaudia Bałazy
CONTACT: klaudia.balazy[at]doctoral.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: Python, PyTorch

PROJECT NAME: Continual learning with quick remembering
KEYWORDS: continual learning, transfer learning
DESCRIPTION: In continual learning we want to understand the phenomenon of catastrophic forgetting: a network quickly losing performance on previously learned tasks after encountering new tasks. However, this usually concerns zero-shot forgetting. What happens if we're allowed to quickly recall the old problem before attempting to solve it? The goal of the project is to investigate how quickly we can recall the "forgotten" knowledge and to build a CL method optimized for that.
PEOPLE: Maciej Wołczyk
CONTACT: maciej.wolczyk[at]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or Jax

PROJECT NAME: How to handle data shift when fine-tuning RL models?
KEYWORDS: reinforcement learning, transfer learning, continual learning
DESCRIPTION: Foundation models have delivered impressive outcomes in areas like computer vision and language processing, but not as much in reinforcement learning. It has been demonstrated that fine-tuning on compositional tasks, where certain aspects of the environment may only be revealed after extensive training, is susceptible to catastrophic forgetting. In this situation, a pre-trained model may lose valuable knowledge before encountering parts of the state space that it can handle. The goal of the project is to research and develop methods which could prevent forgetting of the pretrained weights and therefore achieve better performance by leveraging previous knowledge. Highly recommended reading: section 4.4 of the Minecraft RL paper.
PEOPLE: Maciej Wołczyk, Bartłomiej Cupiał
CONTACT: maciej.wolczyk[at]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Is adjustable augmentation all you need for effective contrastive self-supervised methods?
KEYWORDS: deep learning, self-supervised methods, augmentations
DESCRIPTION: Contrastive self-supervised learning is a type of unsupervised learning in which a model learns to differentiate between similar and dissimilar data pairs. It involves training the model to maximize the similarity between representations of different augmentations of the same data point while minimizing the similarity between representations of different data points.
More and more models appear in the literature that enhance the architecture and training of contrastive-based models. However, none of them concentrate on augmentation, which has a crucial impact on the resulting representation space.
In this project, we concentrate on how augmentation can be used to obtain more robust representations and how to modify the augmentation policy during training to train models more effectively.
PEOPLE: Bartosz Zieliński et al.
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

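As a reference point for the contrastive objective described above, a minimal NT-Xent (SimCLR-style) loss could look as follows; the function name and batch shapes are our own illustrative choices.

```python
import torch
import torch.nn.functional as F

# Hypothetical minimal NT-Xent loss: z1[i] and z2[i] embed two augmentations
# of the same image (a positive pair); all other batch entries are negatives.

def nt_xent(z1, z2, temperature=0.5):
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2]), dim=1)      # (2n, d), unit-norm rows
    sim = z @ z.T / temperature                      # scaled cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])  # positives: i <-> i+n
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)    # stand-ins for encoder outputs
print(nt_xent(z1, z2))  # decreases as matching augmentations align
```
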
PROJECT NAME: Rethinking visual transformer input for more effective training
KEYWORDS: deep learning, transformer, effective training
DESCRIPTION: Visual transformers are a type of deep neural network architecture designed for computer vision tasks, where image or video data is transformed using self-attention mechanisms. This approach allows the network to selectively focus on different regions or features within the input, leading to improved performance on tasks such as object detection, image classification, and segmentation.
The standard visual transformer architecture takes an input image, divides it into a sequence of patches, and processes these patches through multiple layers of self-attention and feedforward networks to extract high-level visual features. Due to the large number of patches, visual transformers require significant computational power to train.
In this project, we will analyze how to modify the transformer input to limit the number of input patches. For this purpose, we will, e.g., consider patches with differing resolutions.
PEOPLE: Bartosz Zieliński et al.
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

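For scale, the standard patchification step described above can be sketched in a few lines (an illustration with typical but arbitrary hyperparameters). Since self-attention cost grows quadratically with the number of tokens, coarsening the patch grid pays off quickly.

```python
import torch
import torch.nn as nn

# Standard ViT tokenization: cut the image into non-overlapping patches and
# project each to an embedding; a strided convolution does both at once.
patch, dim = 16, 192                                   # typical, arbitrary values
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

img = torch.randn(1, 3, 224, 224)
tokens = to_tokens(img).flatten(2).transpose(1, 2)     # (1, 196, 192): a 14x14 grid
# With 32x32 patches the sequence drops to 49 tokens, reducing the quadratic
# self-attention cost roughly 16x, at the price of coarser spatial detail.
```
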
PROJECT NAME: Effective segmentation of high-resolution images
KEYWORDS: deep learning, image segmentation, high-resolution images
DESCRIPTION: Most image segmentation methods take an image at its original resolution as input and analyze the image pixels based on various characteristics. However, in real-world applications, like satellite maps or whole-slide histopathology, it is impossible to process the whole image due to its high resolution. One possible solution is to process image patches separately, but then we lose the context between them.
In this project, we analyze different approaches where we choose the most informative patches from the high-resolution image and process only them through the model. We will test different strategies for choosing as few of those patches as possible while obtaining satisfactory segmentation.
PEOPLE: Bartosz Zieliński et al.
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Weakly-supervised image segmentation
KEYWORDS: deep learning, image segmentation, weak supervision
DESCRIPTION: To train image segmentation methods, one typically requires a dataset of labeled images where each pixel or region is labeled with a corresponding class or category. However, obtaining such a well-labeled dataset is almost impossible in real-world applications. In fact, in many real-world scenarios, we only have the information that an object appears in the image, with no information about its segmentation or even location.
In this project, we will introduce methods able to train segmentation solutions based only on such weakly-labeled training data. For this purpose, we will start by adapting recent achievements from the partial-label learning domain.
PEOPLE: Bartosz Zieliński et al.
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Interpretable flavor in unsupervised representation learning
KEYWORDS: deep learning, interpretability, self-supervision
DESCRIPTION: Representation learning without class labels can be achieved with self-supervised learning, which can be trained with two strategies: contrastive learning or pseudo-labels based on the input data (widely used with transformer architectures). However, the obtained representations are hardly interpretable, so we cannot explain what visual properties are represented in the latent space.
In this project, we will develop learning methods that encode the input data into a latent representation with semantically meaningful features. We will start with models consisting of prototypical parts as the coded features.
PEOPLE: Bartosz Zieliński et al.
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Generalizing interpretable models to an open-world setting
KEYWORDS: deep learning, continual learning, interpretability
DESCRIPTION: The typical AI-based recognition system predicts a class for an image provided by a user, assuming that the image is within the distribution of samples used during training. However, this assumption does not always hold. That is why some approaches detect new classes and adjust the model for them; however, they do not do it in an interpretable way.
In this project, we will introduce interpretability into open-world problems like continual learning and generalized category discovery. We will start by adopting a prototypical-parts approach and then consider various cognitive theories to make the models user-oriented.
PEOPLE: Bartosz Zieliński et al.
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

GMUM PROJECTS (Not actively looking for students)
These projects are not actively looking for students. However, if you find a project interesting and would like to know more about it, you may contact the person listed under CONTACT.

PROJECT NAME: New representations for molecules
KEYWORDS: deep learning, cheminformatics
DESCRIPTION: We would like to find new representations for molecules. We could work both on new embedding methods for molecules and on new input representations for graph neural networks.
PEOPLE: Łukasz Maziarka, Tomasz Danel, Agnieszka Pocha
CONTACT: lukasz.maziarka@student.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: Python, PyTorch, TensorFlow

PROJECT NAME: Continual Learning with Experience Replay
KEYWORDS: deep learning, continual learning, experience replay
DESCRIPTION: Catastrophic forgetting occurs in neural networks: when training on a new task, the model completely forgets what it has learned on previous tasks. One of the most efficient ways of combating catastrophic forgetting is experience replay, i.e. retraining on a small set of examples from previous tasks. However promising, this approach has not been properly explored. This project aims to understand and improve experience replay methods for continual learning.
PEOPLE: Maciej Wołczyk, Marek Śmieja, Jacek Tabor
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch basics

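For concreteness, a minimal experience-replay sketch (our illustration, not the project's method): a reservoir-sampled buffer of past-task examples mixed into every batch of the current task. All names are hypothetical.

```python
import random
import torch

# Reservoir sampling keeps every example seen so far with equal probability,
# so the buffer stays roughly balanced across old tasks.

class ReplayBuffer:
    def __init__(self, capacity=500):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)   # replace a slot with prob capacity/seen
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

# During training on the current task (names hypothetical):
#   x_old, y_old = buffer.sample(32)
#   loss = criterion(model(x), y) + criterion(model(x_old), y_old)
```
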
PROJECT NAME: Conditional Computation for Efficient Inference
KEYWORDS: deep learning, model compression, conditional computation
DESCRIPTION: The human brain can adaptively change the amount of resources used for the current task. However, neural networks constantly use all their available resources for every example. This is not only inconsistent with the biological perspective, but also highly inefficient. We work on an approach that uses fewer resources (layers, neurons) for easy examples and all available resources for difficult examples.
PEOPLE: Maciej Wołczyk, Bartosz Wójcik, Marek Śmieja, Jacek Tabor
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch basics

PROJECT NAME: Hypernetworks Knowledge Distillation
KEYWORDS: deep learning, teacher-student, computer vision, super-resolution
DESCRIPTION: We are using two hypernetworks in a teacher-student manner to solve the super-resolution task.
PEOPLE: Maciej Wolczyk, Szymon Rams, Tomasz Danel, Łukasz Maziarka
CONTACT: lukasz.maziarka@student.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Aspect-Level Sentiment Classification
KEYWORDS: Natural Language Processing, Sentiment Classification, Attention Modeling, Deep Learning
DESCRIPTION: Aspect-level sentiment classification aims to identify the sentiment expressed towards particular aspects given context sentences. Recently, Hu et al. proposed CAN (https://arxiv.org/pdf/1812.10735.pdf). However, such a mechanism suffers from a major drawback: it seems to overly focus on a few frequent words with sentiment polarities, while little attention is paid to low-frequency ones. Our potential solution to this issue is supervised attention.
PEOPLE: Magdalena Wiercioch
CONTACT: mgkwiercioch[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Molecule Representation for Predicting Drug-Target Interaction
KEYWORDS: Deep Learning, Representation Learning, Cheminformatics
DESCRIPTION: An essential part of the drug discovery process is predicting drug-target interactions. However, the process is expensive in terms of both time and cost. A precisely learned molecule representation in a drug-target interaction model could contribute to developing personalized medicine, which will help many patient cohorts. We want to propose several molecule representations based on various concepts, including but not limited to deep neural networks.
PEOPLE: Magdalena Wiercioch
CONTACT: mgkwiercioch[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Deep learning for molecular design
KEYWORDS: Deep Learning, Cheminformatics, Molecular Design
DESCRIPTION: Searching for new molecules in areas such as drug discovery usually starts from the core structures of candidate molecules, which are then optimized for the properties of interest. Our present work proposes a graph recurrent generative model for molecular structures. The model incorporates side information into a recurrent neural network.
PEOPLE: Magdalena Wiercioch
CONTACT: mgkwiercioch[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Optimization in deep policy gradient methods
KEYWORDS: Deep Learning, Reinforcement Learning, Optimization
DESCRIPTION: Deep policy gradient methods, currently among the most used tools of reinforcement learning researchers, have some non-obvious optimization properties. We investigate questions such as: why is PPO more efficient than TRPO, how important are the various tricks used when implementing PPO, and how can we improve the sample efficiency of these methods?
PEOPLE: Maciej Wołczyk
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Optimization in neural networks without backpropagation and gradients
KEYWORDS: Deep Learning, Optimization, Bio-inspired
DESCRIPTION: Neuroscientific studies of the mechanisms of learning in the brain suggest that backpropagation (and especially backpropagation through time, as in RNNs) may not be a viable method of learning in neural structures. We want to explore other, more biologically justified approaches to this problem.
PEOPLE: Jacek Tabor, Aleksandra Nowak, Maciej Wołczyk
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Fidelity-Weighted Learning
KEYWORDS: neural networks, deep neural networks, learning methods
DESCRIPTION: Fidelity-weighted learning is a student-teacher method for learning from labels of varying quality.
PEOPLE: Krzysztof Misztal, Agnieszka Pocha
CONTACT: krzysztof.misztal@uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Continual Learning in vision tasks
KEYWORDS: continual learning, deep networks
DESCRIPTION: The aim of the project is to develop a method for continual learning of deep neural network architectures. Such a model should be able to learn new tasks without forgetting the previous ones, where some part of the model is shared between tasks and as few old-task resources as possible are kept to prevent forgetting.
PEOPLE: Jacek Tabor, Igor Podolak, Bartosz Zieliński, Łukasz Struski, Dawid Rymarczyk
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Neural networks adapting to datasets: learning network size and topology
KEYWORDS: deep learning, network pruning, neural architectures
DESCRIPTION: We introduce a flexible setup allowing a neural network to learn both its size and topology during the course of standard gradient-based training. The resulting network has the structure of a graph tailored to the particular learning task and dataset. The obtained networks can also be trained from scratch and achieve virtually identical performance. We explore the properties of the network architectures for a number of datasets of varying difficulty, observing systematic regularities.
PEOPLE: Romuald Janik, Aleksandra Nowak
CONTACT: aleksandrairena.nowak[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: https://arxiv.org/abs/2006.12195

PROJECT NAME: Relationship between disentanglement and multi-task learning
KEYWORDS: deep learning, disentanglement learning, multi-task learning, hard parameter sharing
DESCRIPTION: One of the main arguments for studying disentangled representations is the assumption that they can be easily reused in different tasks. At the same time, finding a joint, adaptable representation of data is one of the key challenges in many multi-task learning settings. The aim of the project is to take a closer look at the relationship between disentanglement and multi-task learning.
PEOPLE: Łukasz Maziarka, Aleksandra Nowak, Andrzej Bedychaj, Maciej Wołczyk
CONTACT: aleksandrairena.nowak[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Explaining metabolic stability (pilot study)
KEYWORDS: cheminformatics, explainability, web service
DESCRIPTION: Metabolic stability is one of several molecular properties optimised in drug design pipelines. It is connected with the duration of the desirable therapeutic effect of the drug.
The exact mechanisms of drug metabolism are yet to be discovered. Explaining the predictions of machine learning models can give us ideas about which chemical structures are important.
We plan to publish a pilot study in the International Journal of Molecular Sciences (IF: 4.56, number of ministerial points: 30). The planned submission date is by the end of the year.
PEOPLE: Agnieszka Pocha, Sabina Podlewska
CONTACT: agnieszka.pocha[at]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: The AI and explainability parts are mostly done. We need a student to build an interactive webpage which will present the existing explanations and generate new ones (using the provided Python API) for molecules uploaded by users. We require knowledge of designing and building webpages, as well as of standard technologies including HTML, CSS and JavaScript.

PROJECT NAME: Semi-supervised siamese neural networks
KEYWORDS: siamese neural networks, semi-supervised learning
DESCRIPTION: Siamese networks are used to label new examples when we have a large number of classes: https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf. We focus on designing semi-supervised versions of siamese networks, which use only a small number of labeled examples. An example of such a method: https://www.ijcai.org/proceedings/2017/0358.pdf
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Clustering with pairwise constraints
KEYWORDS: clustering, pairwise constraints (must-link, cannot-link), semi-supervised learning
DESCRIPTION: Clustering is an ill-posed problem. Making use of a small number of labeled data points, we can specify what we mean by similarity; this is the area of semi-supervised clustering. We will focus on constructing discriminative clustering models which take the information about labeled data into account.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Generative model in the multi-label case
KEYWORDS: generative models, flow models, disentanglement, multi-label classification
DESCRIPTION: We construct a semi-supervised generative model for partially labeled data. More precisely, every example can be labeled with many binary attributes, but we only have access to a few labels. Such a generative model should allow for generating new examples with desired properties (labels).
PEOPLE: Marek Śmieja, Maciej Wołczyk, Łukasz Maziarka
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Multi-output regression for object tracking
KEYWORDS: generative models, image processing, object detection, hypernetworks, clustering, regression
DESCRIPTION: We consider the problem of predicting object position. As the future is uncertain to a large extent, modeling the uncertainty and multimodality of the future states is of great relevance. For this purpose we will use a generative model that takes the multimodality and uncertainty into account. The aim of the project is to compare with https://openaccess.thecvf.com/content_CVPR_2019/papers/Makansi_Overcoming_Limitations_of_Mixture_Density_Networks_A_Sampling_and_Fitting_CVPR_2019_paper.pdf
PEOPLE: Marek Śmieja, Jacek Tabor, Przemysław Spurek
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Auto-encoder with discrete latent space
KEYWORDS: auto-encoder, generative models, discrete variables, reparametrization trick, importance sampling
DESCRIPTION: Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this project we want to build an auto-encoder model with a discrete latent space; see https://arxiv.org/pdf/1611.01144.pdf
PEOPLE: Marek Śmieja, Jacek Tabor, Łukasz Struski, Klaudia Balazy
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

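The linked paper's Gumbel-softmax trick is available as a built-in in PyTorch; a minimal sketch of how a "discrete" latent can stay differentiable (the codebook here is a hypothetical stand-in for a decoder input):

```python
import torch
import torch.nn.functional as F

# Gumbel-softmax: a differentiable relaxation of categorical sampling.
logits = torch.randn(4, 10, requires_grad=True)     # e.g. encoder output: 4 latents, 10 categories

soft = F.gumbel_softmax(logits, tau=0.5)            # relaxed (soft) categorical sample
hard = F.gumbel_softmax(logits, tau=0.5, hard=True) # one-hot forward, soft gradients backward

codebook = torch.randn(10, 3)                       # hypothetical decoder embedding table
z = hard @ codebook                                 # discrete-looking selection, yet differentiable
z.sum().backward()
print(logits.grad.shape)                            # torch.Size([4, 10]): gradients reach the logits
```
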
PROJECT NAME: Learning neural networks from missing data
KEYWORDS: missing data, convolutional neural networks
DESCRIPTION: The project concerns the problem of training convolutional neural networks directly on missing data. We plan to extend the following model: https://papers.nips.cc/paper/7537-processing-of-missing-data-by-neural-networks.pdf
PEOPLE: Marek Śmieja, Łukasz Struski, Jacek Tabor
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Pharmacophoric Autoencoder
KEYWORDS: deep learning, cheminformatics, autoencoders
DESCRIPTION: The plan is to create an autoencoder that works with molecular graphs embedded in 3D space. Such a generative model should be able to generate compounds with some pre-defined 3D constraints. As input, it takes 3D molecular graphs and pharmacophoric features represented as 3D points. The 3D positions are generated by molecular docking software. The pharmacophoric (3D) constraints are given in the latent space of this model.
PEOPLE: Tomasz Danel, Łukasz Maziarka, Bartosz Podkanowicz, Artur Kasymov, Sabina Podlewska, Marek Śmieja, Igor Podolak
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Non-Gaussian Gaussian Processes
KEYWORDS: deep learning, meta-learning, few-shot learning, regression, gaussian processes, normalizing flows
DESCRIPTION: Gaussian Processes (GPs) have been widely used in machine learning to model distributions over functions, with applications including multi-modal regression, time-series prediction, and few-shot learning. GPs are particularly useful in the last application since they rely on Normal distributions and, hence, enable closed-form computation of the posterior probability function. Unfortunately, because the resulting posterior is not flexible enough to capture complex distributions, GPs assume high similarity between subsequent tasks, a requirement rarely met in real-world conditions. In this work, we address this limitation by leveraging the flexibility of Normalizing Flows to modulate the posterior predictive distribution of the GP, which makes the GP posterior locally non-Gaussian.
PEOPLE: Marcin Sendera, Jacek Tabor, Aleksandra Nowak, Andrzej Bedychaj, Massimiliano Patacchiola, Tomasz Trzciński, Przemysław Spurek, Maciej Zięba
CONTACT: marcin.sendera[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Better Knowledge Transfer for Non-Gaussian Gaussian Processes (Continuous Object Tracking)
KEYWORDS: deep learning, meta-learning, few-shot learning, object tracking, gaussian processes, normalizing flows
DESCRIPTION: This project builds on the previous Non-Gaussian Gaussian Processes (NGGP) work. We are going to enhance the flexibility of NGGPs by introducing the full information coming from the support data in the few-shot learning setting. The main aim is to enable much faster learning and knowledge transfer from task to task. We also want to apply such a solution to the continuous object tracking problem: specifically, to generate the probability density function of the object position over time, assuming knowledge of a discrete set of past positions.
PEOPLE: Marcin Sendera, Jacek Tabor, Massimiliano Patacchiola, Przemysław Spurek, Maciej Zięba, Rafał Nowak
CONTACT: marcin.sendera[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Normalizing Flows in Anomaly Detection
KEYWORDS: deep learning, generative models, normalizing flows, anomaly detection
DESCRIPTION: Anomaly detection is the problem of identifying abnormal or novel data among normal data. We propose to utilize the flexibility of Normalizing Flow models, which can be treated as bijections from the original space to a new latent space. This property allows for explicit algebraic expressions for the density of points in the latent space. We utilize various objective functions combined with different flow models to discover anomalies.
PEOPLE: Marcin Sendera, Jacek Tabor
CONTACT: marcin.sendera[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch

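A minimal sketch of the scoring rule (our toy illustration: a single affine bijection fit by maximum likelihood stands in for a real multi-layer flow): since a flow gives an explicit density, the anomaly score of x can simply be -log p(x).

```python
import torch
import torch.distributions as D

# Toy "flow": one affine bijection; a real model would stack learned coupling
# layers, but the scoring rule -log p(x) is the same.
x_train = torch.randn(2000) * 2.0 + 5.0            # "normal" data
loc = torch.zeros(1, requires_grad=True)
log_scale = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([loc, log_scale], lr=0.05)

def flow():
    return D.TransformedDistribution(
        D.Normal(0.0, 1.0), [D.AffineTransform(loc, log_scale.exp())]
    )

for _ in range(300):
    loss = -flow().log_prob(x_train).mean()        # maximize likelihood of normal data
    opt.zero_grad()
    loss.backward()
    opt.step()

score = -flow().log_prob(torch.tensor([5.0, 50.0]))
print(score)  # the far larger score flags 50.0 as an anomaly
```
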
PROJECT NAME: Mol2Image Translation: Generation of High-Content Images Based on Chemical Structures
KEYWORDS: convolutional neural networks, computer vision, cheminformatics
DESCRIPTION: High-Content Screening is a technology that accelerates drug discovery pipelines by providing a fast method for screening vast numbers of chemical compounds and analysing the output images from fluorescence microscopy (Google "high-content screening", the images are quite cool). In this project we want to create a generative model that transforms chemical structures (SMILES strings or molecular graphs) into images representing phenotypic changes in cellular systems.
PEOPLE: Tomasz Danel, Adriana Borowa
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow

PROJECT NAME: Interpretable Uncertainty in Molecular Data
KEYWORDS: cheminformatics, machine learning, uncertainty
DESCRIPTION: In drug discovery pipelines, it is important to accurately predict molecular properties for yet unseen chemical compounds. Of course, the quality of predictions decreases as we depart from the known chemical space. There are some methods for assessing the uncertainty of predictions for out-of-domain data, e.g. conformal prediction, but they do not provide us with explanations of why the predictions were marked as uncertain. We plan to create an interpretable uncertainty estimator that indicates "strange" chemical fragments (different from the training set) in the compound.
PEOPLE: Tomasz Danel, Anna Bielawska
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: ML basics (scikit-learn, numpy); a basic understanding of chemistry is a plus (solid high-school level)

PROJECT NAME: Neural Molecular Docking
KEYWORDS: deep learning, geometric deep learning, cheminformatics, computer-aided drug design
DESCRIPTION: The goal is to create a one-shot model that predicts docking poses of molecules inside a binding pocket. Drugs bind to the binding pocket of the target protein to modulate its functions. Typically, these drug-target interactions are modelled by molecular docking, which predicts the binding pose of the compound in 3D space. Molecular docking methods are time-consuming due to the optimization methods used. In this project, we want to develop a neural network that can quickly generate docking poses or even distributions over possible docking poses.
PEOPLE: Tomasz Danel, Przemysław Spurek, Adam Sułek, Wojciech Sekta, Krzysztof Wierzbicki
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch (preferably PyTorch Geometric)

PROJECT NAME: Interpretable Graph Neural Networks Using Prototypes (with Applications in Drug Discovery)
KEYWORDS: deep learning, explainable artificial intelligence, graph neural networks, interpretability, cheminformatics
DESCRIPTION: Molecular graphs are a popular representation of molecules in machine learning. State-of-the-art methods for predicting molecular properties are based on graph neural networks; however, the interpretability of these methods is limited. In this project, we plan to implement a prototype-based method for interpreting the results of graph neural networks. The idea is borrowed from computer vision, where prototypes were used with convolutional neural networks to provide explanations for the predictions. Each prototype corresponds to some image feature that can be easily recognized by the human eye.
PEOPLE: Tomasz Danel, Dawid Rymarczyk, Daniel Dobrowolski
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch (preferably PyTorch Geometric)

PROJECT NAME: Aggregation Methods for Molecular Graphs
KEYWORDS: graph neural networks, cheminformatics, clustering
DESCRIPTION: In this project, we aim to investigate different approaches to graph aggregation in order to find a method best suited to molecular data. We will also implement a pooling layer based on chemical fragments, e.g. functional groups, to simplify the molecular graph input and (hopefully) increase the predictive performance of the network.
PEOPLE: Tomasz Danel, Ewa Swatowska
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch (preferably PyTorch Geometric)

PROJECT NAME: Contrastive Learning for Graphs
KEYWORDS: graph theory, graph neural networks, cheminformatics
DESCRIPTION: The aim of this project is to create a method for unsupervised/self-supervised/semi-supervised pre-training of graph neural networks using contrastive learning. Contrastive methods use graph similarity to learn a representation of the input graph data. In some domains, such as chemistry, graph edit distance does not describe dissimilarities between graphs well. We plan to create better methods, e.g. ones aligned with the perception of chemists in the molecular domain.
PEOPLE: Tomasz Danel
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow

PROJECT NAME: Disentangling world models in reinforcement learning
KEYWORDS: reinforcement learning, world models, generative models, disentanglement
DESCRIPTION: World models are usually trained in an unsupervised manner, and their latent codes do not have any inherent meaning or interpretability. In this project, we try to build world models (such as Dreamer [1] or IRIS [2]) using techniques from disentangling generative models [3].
[1] https://arxiv.org/abs/1912.01603
[2] https://arxiv.org/abs/2209.00588
[3] https://arxiv.org/abs/2109.09011
PEOPLE: Maciej Wołczyk
CONTACT: maciej.wolczyk[at]gmail.com
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: Python, preferably PyTorch or Jax

PROJECT NAME: On-policy continual reinforcement learning
KEYWORDS: continual learning, reinforcement learning, transfer learning
DESCRIPTION: We are looking to extend our Continual World benchmark (https://arxiv.org/abs/2105.10919) by introducing and benchmarking new algorithms such as PPO.
PEOPLE: Maciej Wołczyk
CONTACT: maciej.wolczyk[at]gmail.com
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: Python, preferably TensorFlow 2