A | B | C | D | E | F | G | H | I | J | K | L | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | ||||||||||||
2 | GMUM PROJECTS | |||||||||||
3 | ||||||||||||
4 | ||||||||||||
5 | ||||||||||||
6 | ||||||||||||
7 | PROJECT NAME | KEYWORDS | DESCRIPTION | PEOPLE | CONTACT | STUDENT NEEDED | REQUIREMENTS/ADDITIONAL INFO | |||||
8 | Diffusion models | diffusion models | Two interesting projects: 1. Implicit representations of diffusion: the goal is to shrink the architecture so that it runs on ordinary consumer GPUs. 2. Diffusion on the weights of a network. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
9 | Continual learning | continual learning, meta learning | A project under development; it is based on using hypernetworks for the continual learning task. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
10 | Video Generation | generative models | The project is based on representing sound with a neural network. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
11 | Autoregressive/diffusion-based Normalizing Flows | Normalizing Flows, Generative Models, Diffusion Models, Autoregressive Models | The aim of this project is to create new Normalizing Flow models in the style of diffusion or autoregressive models. | Marcin Sendera | marcin.sendera[AT]gmail.com | YES | PyTorch | |||||
12 | Continual Few-Shot Learning | Continual Learning, Meta-Learning, Few-Shot Learning | The aim of this project is to tackle the problem of continual few-shot learning. | Marcin Sendera | marcin.sendera[AT]gmail.com | YES | PyTorch | |||||
13 | Normalizing Flows in Meta-Learning | Normalizing Flows, Generative Models, Meta-Learning, Few-Shot Learning | The aim of this project is to utilize Normalizing Flows and other generative models in architectures used for very large meta-learning datasets. | Marcin Sendera | marcin.sendera[AT]gmail.com | YES | PyTorch, TensorFlow | |||||
14 | Extended Gaussian Processes in Meta-Learning (few-shot regression) | Normalizing Flows, Generative Models, Gaussian Processes, Meta-Learning, Few-Shot Learning | The aim of this project is to extend the Non-Gaussian Gaussian Processes framework in terms of flexibility (e.g., adding a conditional case based on support set data). | Marcin Sendera, Tomasz Kuśmierczyk | marcin.sendera[AT]gmail.com | YES | PyTorch | |||||
15 | ||||||||||||
16 | Few-shot learning (with hypernetworks) | Few-shot learning, Meta-Learning | Few-shot learning (FSL), also referred to as low-shot learning (LSL) in a few sources, is a type of machine learning problem where the training dataset contains limited information. Few-shot learning aims for deep learning models to predict the correct class of instances when only a small number of examples is available in the training dataset. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
17 | Meta learning (continual learning + few shot) | Few-shot learning, Meta-Learning | Our goal is to verify the MAML algorithm in the continual learning setting (see the MAML sketch below the table). | Przemysław Spurek, Jacek Tabor, and others | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
18 | Bayesian Continual learning | continual learning, Bayesian learning, optimization | Recently my academic focus has been on continual learning. I am interested in Bayesian learning and optimization. Any reasonable combination (potentially with generative modelling) would make a fantastic project. If you don't have any particular ideas, I do have something to offer! | Mateusz Pyla | mateusz.pyla[AT]doctoral.uj.edu.pl | YES | A strong maths background is nice to have. The student needs to know how to program in either TensorFlow or PyTorch (in order to understand the existing code and implement new methods). | |||||
19 | GAN + NeRF | generative models for images, hypernetworks | In the project we will use GANs for generating NeRF representations. NeRF: https://www.matthewtancik.com/nerf | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
20 | NeRF for modeling 3D faces | generative models for images, hypernetworks | In the project we will use GANs for generating NeRF representations. NeRF: https://www.matthewtancik.com/nerf | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
21 | Early exit for visual transformer | early exit, visual transformer | | Jacek Tabor, Klaudia Balazy | jacek.tabor[AT]uj.edu.pl | YES | PyTorch | |||||
22 | Contrastive learning with the use of memorization | contrastive learning | The goal of this work is to create a new approach to contrastive learning in which we train a network that memorizes random labels. | Jacek Tabor, Marek Śmieja | jacek.tabor[AT]uj.edu.pl | YES | PyTorch | |||||
23 | Differentiable splitting of a batch | early exit | The goal is to implement, potentially in CUDA, a function that allows convenient, differentiable splitting of a batch; we see early exit as the main application. | Jacek Tabor, Klaudia Balazy | jacek.tabor[AT]uj.edu.pl | NO | PyTorch, CUDA | |||||
24 | Ensemble learning | ensemble, hypernetworks | The goal of the project is to build a single network (using the hypernetwork concept) that can generate diverse networks solving the same task (see the hypernetwork sketch below the table). | Jacek Tabor, Przemysław Spurek | jacek.tabor[AT]uj.edu.pl | YES | PyTorch | |||||
25 | Early exits in Reinforcement Learning | conditional computation | Our objective is to extend our previous work on early-exiting models to the reinforcement learning domain. | Bartosz Wójcik | bartwojc[AT]gmail.com | YES | PyTorch | |||||
26 | Generating 3D objects with NeRF | deep neural networks | The goal of the project is to generate high-quality models of human faces using the NeRF algorithm. | Przemysław Spurek | przemyslaw.spurek[AT]uj.edu.pl | YES | PyTorch | |||||
27 | Analyzing 3D point clouds | deep neural networks | The project involves creating generative models dedicated to 3D objects. | Przemysław Spurek, Jacek Tabor, and others | przemyslaw.spurek[AT]uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
28 | Augmenting SGD optimizers with low-dimensional 2nd-order information | SGD optimization | SGD optimization is currently dominated by 1st-order methods like Adam. Augmenting them with 2nd-order information would suggest, e.g., an optimal step size. Such an online parabola model can be maintained nearly for free by extracting linear trends from the gradient sequence (arXiv:1907.07063), and the plan is to include it in standard methods like Adam (see the parabola-step sketch below the table). | Jarek Duda | jaroslaw.duda[at]uj.edu.pl | YES | The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis preferred. | |||||
29 | Hierarchical correlation reconstruction | modelling joint distributions, non-stationarity | HCR combines the advantages of machine learning and statistics: at very low cost it offers an MSE-optimal model of the joint distribution of multiple variables as a polynomial, by decomposing statistical dependencies into (interpretable) mixed moments. It allows extracting and exploiting very weak statistical dependencies that are not accessible to other methods like KDE. It can also model their time evolution for non-stationary time series, e.g. in financial data. This project develops the method and searches for its further applications (slides: https://www.dropbox.com/s/7u6f2zpreph6j8o/rapid.pdf ; see the HCR sketch below the table). | Jarek Duda | jaroslaw.duda[at]uj.edu.pl | YES | Preferred experience in mathematics and statistics. | |||||
30 | Conditional generative models | generative models, multi-label learning, learning with partial labels | The goal is to enable control over the object generation process in StyleGAN-type models (and others). Solution developed so far: https://ojs.aaai.org/index.php/AAAI/article/view/20843. Besides developing such models, we want to apply them, e.g., to generating counterfactual examples, which make it possible to explain the predictions of ML models. | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
31 | Hierarchical methods | unsupervised learning, self-supervised learning, hierarchical clustering | Neural networks achieve very good results in typical classification or clustering tasks, but the interpretability of the obtained results is limited. In this project we want to focus on constructing deep models that make predictions by taking a sequence of decisions. Intuitively, we will work on models that build a decision tree/graph, which in particular allows for better interpretation of the results. Model developed so far: https://arxiv.org/abs/2107.13214 | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
32 | Learning from tabular data | tabular data, hypernetworks, ensemble learning | Neural networks achieve very good results in popular domains such as images or text. For tabular data, which lacks local structure, shallow methods such as random forests or XGBoost often achieve better results. The goal is to develop neural network models that can be successfully applied to tabular data. | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
33 | Contrastive self-supervised learning | self-supervised learning, data augmentation | Self-supervised learning models make it possible to build a data representation in an unsupervised way, which can later be successfully used for classification or clustering. Much depends, however, on the augmentations used: if an augmentation changes the class, it is hard to later use such a representation for classification. In this project we want to build models whose representations are less sensitive to the type of augmentations used. | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
34 | Molecular generative models | chemical molecules, generative models | In this project we want to build models for generating chemical molecules. In particular, we want the generated molecules to satisfy given conditions such as activity, solubility, number of rings, etc. | Marek Śmieja | marek.smieja[at]ii.uj.edu.pl | YES | The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods). | |||||
35 | Segmentation learning with regions of confidence | deep neural networks, learning methods | What if not all objects in a single sample are labeled? Can we develop a method for learning deep models in such a case? | Krzysztof Misztal | krzysztof.misztal@uj.edu.pl | YES | The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis preferred. | |||||
36 | Generative Models in Drug Design | deep learning, cheminformatics, generative models | We want to find a way of generating chemical molecules that is useful from the perspective of the drug design process. Primarily, the new generative model should be able to follow given structural constraints and generate structural analogs, i.e. molecules similar to previously seen promising compounds. | Tomasz Danel, Łukasz Maziarka | tomasz.danel[AT]ii.uj.edu.pl | YES | PyTorch and TensorFlow | |||||
37 | Adaptations of travel behaviour in an agent-based urban mobility model | agent-based, reinforcement learning, two-sided mobility, urban mobility | You will simulate a two-sided urban mobility market (like Uber or Lyft) in which agents get rewarded for their actions. In particular, travellers can decide among platforms (Uber or Lyft) or opt out (use public transport) based on previous experiences. They, however, need to learn which actions are (subjectively) optimal for them (see the bandit-style decision sketch below the table). You will use https://github.com/RafalKucharskiPK/MaaSSim and apply decision modules to the agents. | Rafał Kucharski | rafal.kucharski__uj.edu.pl | YES | PyTorch or TensorFlow | |||||
38 | Distributed learning for CAVs in a two-sided mobility market | agent-based, reinforcement learning, two-sided mobility, urban mobility | You will simulate a two-sided urban mobility market (like Uber or Lyft) in which agents get rewarded for their actions. In particular, drivers (or Connected Autonomous Vehicles, CAVs) can reposition and wait for requests in different parts of the city. They, however, need to learn when and where it is efficient to reposition. You will use https://github.com/RafalKucharskiPK/MaaSSim and apply decision modules to the agents. | Rafał Kucharski | rafal.kucharski__uj.edu.pl | YES | PyTorch or TensorFlow | |||||
39 | A benchmark for comparing early-exiting and conditional computation methods and models | conditional computation, pruning, computationally efficient deep models | The project aims to create a unified benchmark for multiple methods that reduce the inference time of deep learning models. We begin by focusing on early-exiting methods (see the early-exit sketch below the table). Your task will be to reimplement a conditional computation method from a selected published paper in our common codebase. Conditional computation methods are usually simple to implement and provide significant computational cost savings. We intend to publish the benchmark with the accompanying analysis as a paper at a rank A* conference. We have tasks appropriate for both beginners and people with experience. | Bartosz Wójcik, Maciej Wołczyk | bartwojc[AT]gmail.com | YES | The student needs to know how to program in PyTorch (in order to understand the existing code and implement new methods). | |||||
40 | Ride-pooling heuristics: combinatorial explosion and supervised learning | supervised learning, graph theory, urban mobility, transport | You will apply a ride-pooling algorithm that pools travellers (e.g. of Uber) into attractive groups. You will use ExMAS (https://github.com/RafalKucharskiPK/ExMAS), which provides an exact analytical search of the combinatorially exploding search space (e.g. for 1000 trip requests there are almost a googol possible 5-person groups). You will use these analytical results to train supervised machine learning models and explore ways to make the search space tractable. | Rafał Kucharski | rafal.kucharski__uj.edu.pl | YES | PyTorch or TensorFlow, optimization, ILP, networkX | |||||
41 | Predicting pooled rides in Chicago (dataset of 5 million trips) | supervised learning, graph theory, urban mobility, transport, XAI | In a dataset of 5 million Uber trips in Chicago, some of the trips (20%) are pooled, i.e. travellers ride together. Which ones, and why? Can we use this dataset to successfully predict which trips will be pooled and what factors influence it? This paper scratched the surface; let's go deeper: https://doi.org/10.1177/0361198120915886 | Rafał Kucharski | rafal.kucharski__uj.edu.pl | YES | PyTorch or TensorFlow, pandas, XAI | |||||
42 | Model compression in Transformer-based Language Models | NLP, model compression, transformers | The goal of this research is to propose a complete methodology for compressing large language models based on the Transformer architecture. | Klaudia Bałazy | klaudia.balazy[at]doctoral.uj.edu.pl | NO | PyTorch | |||||
43 | Dynamic computations in NLP models | conditional computation, early exit, NLP, transformer | Transformers are the foundation of many well-performing neural language processing models. Unfortunately, they require a lot of computational resources, which results in slow inference. In this project we aim to leverage conditional computation methods to speed up inference along three axes: depth-wise sparsity (early exits), width-wise sparsity (mixture of experts) and input-wise sparsity (dynamic sequence pruning). Additionally, we would like to examine the hypothesis that some data points are easier for neural networks to process. For that purpose, among other things, we would like to implement a dynamic variant of mixture of experts (MoE) that enables MoE layers to use fewer resources for easy data points, and compare it with difficulty ratings extracted from early-exit models. | Klaudia Bałazy | klaudia.balazy[at]doctoral.uj.edu.pl | YES | PyTorch | |||||
44 | Convolutional Mixture of Experts | conditional computation, efficient neural networks | Mixture-of-experts models are currently very popular in Transformer-based architectures. This work intends to test whether the benefits of MoE layers can be transferred to other architecture types, such as convolutional networks (see the convolutional MoE sketch below the table). We have tasks appropriate for both beginners and people with experience. | Bartosz Wójcik | bartwojc[AT]gmail.com | YES | PyTorch | |||||
45 | Early exiting while training | conditional computation, efficient neural networks | We want to extend our work on early-exiting to also accelerate the training process. | Bartosz Wójcik | bartwojc[AT]gmail.com | YES | PyTorch | |||||
46 | HuggingMolecules | molecular property prediction, Transformer, open-source | An open-source library for transformer-based molecular property prediction with a simple and unified API that provides implementations of several state-of-the-art transformers for molecular property prediction. The library is in the development stage, and there are many interesting things to be implemented: novel transformer-based models, pre-training methods, integration with the huggingface caching system, Continuous Integration, and a few other things. The complexity of the tasks is diverse, ranging from "good first issue" to "game-changer", so basically anyone can find something suitable :) | Piotr Gaiński, Łukasz Maziarka, Tomasz Danel, and Stanisław Jastrzębski | piotr.gainski[at]student.uj.edu.pl | YES | PyTorch | |||||
47 | Extending the Continual World Benchmark | continual learning, reinforcement learning, transfer learning | We are looking to extend our Continual World benchmark: https://arxiv.org/abs/2105.10919 in various ways, such as learning from pixels, implementing new RL algorithms, implementing new continual learning methods, exploring sparse rewards setting. | Maciej Wołczyk | maciej.wolczyk[at]gmail.com | YES | Python, preferably TensorFlow 2 | |||||
48 | NLP: Non-deterministic representation of words using Gaussian distributions | NLP, word representations | | Jacek Tabor, Przemysław Spurek, Klaudia Bałazy | klaudia.balazy[at]doctoral.uj.edu.pl | YES | Python, PyTorch | |||||
49 | Continual learning with quick remembering | continual learning, transfer learning | In continual learning we want to understand the phenomenon of catastrophic forgetting: a network quickly losing performance on previously learned tasks after encountering new tasks. However, this usually concerns zero-shot forgetting; what happens if we're allowed to quickly recall the old problem before attempting to solve it? The goal of the project is to investigate how quickly we can recall the "forgotten" knowledge and to build a CL method optimized for that. | Maciej Wołczyk | maciej.wolczyk[at]gmail.com | YES | PyTorch or Jax | |||||
50 | How to handle data shift during fine-tuning of RL models? | reinforcement learning, transfer learning, continual learning | Foundation models have delivered impressive outcomes in areas like computer vision and language processing, but not as much in reinforcement learning. It has been demonstrated that fine-tuning on compositional tasks, where certain aspects of the environment may only be revealed after extensive training, is susceptible to catastrophic forgetting. In this situation, a pre-trained model may lose valuable knowledge before encountering parts of the state space that it can handle. The goal of the project is to research and develop methods that could prevent forgetting of the pretrained weights and therefore achieve better performance by leveraging previous knowledge. Section 4.4 of the Minecraft RL paper is highly recommended. | Maciej Wołczyk, Bartłomiej Cupiał | maciej.wolczyk[at]gmail.com | YES | PyTorch | |||||
51 | Is adjustable augmentation all you need for effective contrastive self-supervised methods? | deep learning, self-supervised methods, augmentations | Contrastive self-supervised learning is a type of unsupervised learning in which a model learns to differentiate between similar and dissimilar data pairs. It involves training the model to maximize the similarity between representations of different augmentations of the same data point while minimizing the similarity between representations of different data points (see the contrastive-loss sketch below the table). More and more models appear in the literature that enhance the architecture and training of contrastive-based models. However, none of them concentrates on augmentation, which has a crucial impact on the resulting representation space. In this project, we concentrate on how augmentation can be used to obtain more robust representations and how to modify the augmentation policy during training to train models more effectively. | Bartosz Zieliński et al. | bartosz.zielinski[AT]uj.edu.pl | YES | PyTorch | |||||
52 | Rethinking visual transformer input for more effective training | deep learning, transformer, effective training | Visual transformers are a type of deep neural network architecture designed for computer vision tasks, where image or video data is transformed using self-attention mechanisms. This approach allows the network to selectively focus on different regions or features within the input, leading to improved performance on tasks such as object detection, image classification, and segmentation. The standard visual transformer architecture takes an input image, divides it into a sequence of patches, and processes these patches through multiple layers of self-attention and feedforward networks to extract high-level visual features. Due to the large number of patches, such models require significant computational power to train. In this project, we will analyze how to modify the transformer input to limit the number of input patches. For this purpose we will, e.g., consider patches with differing resolutions. | Bartosz Zieliński et al. | bartosz.zielinski[AT]uj.edu.pl | YES | PyTorch | |||||
53 | Effective segmentation of high-resolution images | deep learning, image segmentation, high-resolution images | Most image segmentation methods take an image at its original resolution as input and analyze the image pixels based on various characteristics. However, in real-world applications, like satellite maps or whole-slide histopathology, it is impossible to process the whole image due to its high resolution. One possible solution is to process image patches separately, but then we lose the context between them. In this project, we analyze different approaches where we choose the most informative patches from the high-resolution image and process only them through the model. We will test different strategies of choosing as few of those patches as possible while obtaining satisfactory segmentation. | Bartosz Zieliński et al. | bartosz.zielinski[AT]uj.edu.pl | YES | PyTorch | |||||
54 | Weakly-supervised image segmentation | deep learning, image segmentation, weak supervision | To train image segmentation methods, one typically requires a dataset of labeled images where each pixel or region is labeled with a corresponding class or category. However, obtaining such a well-labeled dataset is almost impossible in real-world applications. In fact, in many real-world scenarios, we only have the information that an object appears in the image, but no information is given about its segmentation or even location. In this project, we will introduce methods able to train segmentation solutions based only on such weakly-labeled training data. For this purpose, we will start by adapting recent achievements from the partial label learning domain. | Bartosz Zieliński et al. | bartosz.zielinski[AT]uj.edu.pl | YES | PyTorch | |||||
55 | Interpretable flavor in unsupervised representation learning | deep learning, interpretability, self-supervision | Representation learning without class labels can be achieved with self-supervised learning, which can be trained with two strategies: contrastive learning or pseudo-labels based on the input data (widely used with transformer architectures). However, the obtained representations are hardly interpretable, so we cannot explain what visual properties are represented in the latent space. In this project, we will develop learning methods that encode the input data into a latent representation with semantically meaningful features. We will start with models consisting of prototypical parts as coded features. | Bartosz Zieliński et al. | bartosz.zielinski[AT]uj.edu.pl | YES | PyTorch | |||||
56 | Generalizing interpretable models to an open-world setting | deep learning, continual learning, interpretability | The typical AI-based recognition system predicts a class for an image provided by a user, assuming that the image is within the distribution of samples used during training. However, this assumption does not always hold. That is why some approaches detect new classes and adjust the model for them. However, they do not do it in an interpretable way. In this project, we will introduce interpretability into open-world problems like continual learning and generalized category discovery. We will start by adopting a prototypical-parts approach and then consider various cognitive theories to make them user-oriented. | Bartosz Zieliński et al. | bartosz.zielinski[AT]uj.edu.pl | YES | PyTorch | |||||
57 | ||||||||||||
58 | GMUM PROJECTS (Not actively looking for students) | |||||||||||
59 | These projects are not actively looking for students. However, if you find a project interesting and would like to know more about it, you may contact the person listed in the "CONTACT" column. |||||||||||
60 | ||||||||||||
61 | New representations for molecules | deep learning, cheminformatics | We would like to find new representations for molecules. We could work on both new embedding methods for molecules and new input representations for graph neural networks. | Łukasz Maziarka, Tomasz Danel, Agnieszka Pocha | lukasz.maziarka@student.uj.edu.pl | NO | Python, PyTorch, TensorFlow | |||||
62 | Continual Learning with Experience Replay | deep learning, continual learning, experience replay | Catastrophic forgetting occurs in neural networks: when training on a new task, the model completely forgets what it has learned on previous tasks. One of the most efficient ways of combating catastrophic forgetting is experience replay, i.e. retraining on a small set of examples from previous tasks (see the replay-buffer sketch below the table). However promising, this approach has not been properly explored. This project aims to understand and improve experience replay methods for continual learning. | Maciej Wołczyk, Marek Śmieja, Jacek Tabor | maciej.wolczyk[AT]gmail.com | NO | PyTorch basics | |||||
63 | Conditional Computation for Efficient Inference | deep learning, model compression, conditional computation | The human brain can adaptively change the amount of resources used for the current task. Neural networks, however, constantly use all their available resources on every example. This is not only inconsistent with the biological perspective, but also highly inefficient. We work on an approach that uses fewer resources (layers, neurons) for easy examples and all available resources for difficult examples. | Maciej Wołczyk, Bartosz Wójcik, Marek Śmieja, Jacek Tabor | maciej.wolczyk[AT]gmail.com | NO | PyTorch basics | |||||
64 | Hypernetworks Knowledge Distillation | deep learning, teacher-student, computer vision, super-resolution | We are using two hypernetworks in a teacher-student manner to solve the super-resolution task. | Maciej Wolczyk, Szymon Rams, Tomasz Danel, Łukasz Maziarka | lukasz.maziarka@student.uj.edu.pl | NO | ||||||
65 | Aspect-Level Sentiment Classification | Natural Language Processing, Sentiment Classification, Attention Modeling, Deep Learning | Aspect-level sentiment classification aims to identify the sentiment expressed towards some aspects given context sentences. Recently, Hu et al. proposed CAN (https://arxiv.org/pdf/1812.10735.pdf). However, such a mechanism suffers from a major drawback: it seems to overly focus on a few frequent words with sentiment polarities, while little attention is paid to low-frequency ones. Our potential solution to this issue is supervised attention. | Magdalena Wiercioch | mgkwiercioch[AT]gmail.com | NO | ||||||
66 | Molecule Representation for Predicting Drug-Target Interactions | Deep Learning, Representation Learning, Cheminformatics | An essential part of the drug discovery process is predicting drug-target interactions. However, the process is expensive in terms of both time and cost. A precisely learned molecule representation in a drug-target interaction model could contribute to developing personalized medicine, which will help many patient cohorts. We want to propose several molecule representations based on various concepts, including, but not limited to, deep neural networks. | Magdalena Wiercioch | mgkwiercioch[AT]gmail.com | NO | ||||||
67 | Deep learning for molecular design | Deep Learning, Cheminformatics, Molecular Design | Searching for new molecules in areas such as drug discovery usually starts from the core structures of candidate molecules, whose properties of interest are then optimized. Our present work proposes a graph recurrent generative model for molecular structures. The model incorporates side information into a recurrent neural network. | Magdalena Wiercioch | mgkwiercioch[AT]gmail.com | NO | ||||||
68 | Optimization in deep policy gradient methods | Deep Learning, Reinforcement Learning, Optimization | Deep policy gradient methods, currently among the most used tools of reinforcement learning researchers, have some non-obvious optimization properties. We investigate questions such as: why is PPO more efficient than TRPO, how important are the various tricks used when implementing PPO, and how can we improve the sample efficiency of these methods? | Maciej Wołczyk | maciej.wolczyk[AT]gmail.com | NO | ||||||
69 | Optimization in neural networks without backpropagation and gradients | Deep Learning, Optimization, Bio-inspired | Neuroscientific studies of the mechanisms of learning in the brain suggest that backpropagation (and especially backpropagation through time, as in RNNs) may not be a viable method of learning in neural structures. We want to explore other, more biologically justified approaches to this problem. | Jacek Tabor, Aleksandra Nowak, Maciej Wołczyk | maciej.wolczyk[AT]gmail.com | NO | ||||||
70 | Fidelity-Weighted Learning | neural networks, deep neural networks, learning methods | Fidelity-weighted learning is a student-teacher method for learning from labels of varying quality. | Krzysztof Misztal, Agnieszka Pocha | krzysztof.misztal@uj.edu.pl | NO | ||||||
71 | ||||||||||||
72 | Continual Learning in vision tasks | continual learning, deep networks | The aim of the project is to develop a method for continual learning of deep neural network architectures. Such a model should be able to learn new tasks without forgetting previous ones, where some part of the model is shared between tasks and as few old-task resources as possible are kept to prevent forgetting. | Jacek Tabor, Igor Podolak, Bartosz Zieliński, Łukasz Struski, Dawid Rymarczyk | bartosz.zielinski[AT]uj.edu.pl | NO | ||||||
73 | Neural networks adapting to datasets: learning network size and topology | deep learning, network pruning, neural architectures | We introduce a flexible setup that allows a neural network to learn both its size and topology during the course of standard gradient-based training. The resulting network has the structure of a graph tailored to the particular learning task and dataset. The obtained networks can also be trained from scratch and achieve virtually identical performance. We explore the properties of the network architectures for a number of datasets of varying difficulty, observing systematic regularities. | Romuald Janik, Aleksandra Nowak | aleksandrairena.nowak[AT]doctoral.uj.edu.pl | NO | https://arxiv.org/abs/2006.12195 | |||||
74 | Relationship between disentanglement and multi-task learning | deep learning, disentanglement learning, multi-task learning, hard parameter sharing | One of the main arguments behind studying disentangled representations is the assumption that they can be easily reused in different tasks. At the same time, finding a joint, adaptable representation of data is one of the key challenges in many multi-task learning settings. The aim of the project is to take a closer look at the relationship between disentanglement and multi-task learning. | Łukasz Maziarka, Aleksandra Nowak, Andrzej Bedychaj, Maciej Wołczyk | aleksandrairena.nowak[AT]doctoral.uj.edu.pl | NO | ||||||
75 | Explaining metabolic stability (pilot study) | cheminformatics, explainability, web service | Metabolic stability is one of several molecular properties optimised in drug design pipelines. It is connected with the duration of the desirable therapeutic effect of the drug. The exact mechanisms of drug metabolism are yet to be discovered. Explaining the predictions of machine learning models can give us ideas about which chemical structures are important. We plan to publish a pilot study in the International Journal of Molecular Sciences (IF: 4.56, number of ministerial points: 30). The planned submission date is by the end of the year. | Agnieszka Pocha, Sabina Podlewska | agnieszka.pocha[at]doctoral.uj.edu.pl | NO | The AI and explainability parts are mostly done. We need a student to build an interactive webpage which will present the existing explanations and generate new ones (using the provided Python API) for molecules uploaded by users. We require knowledge of designing and building webpages, as well as standard technologies including HTML, CSS and JavaScript. | |||||
76 | Semi-supervised siamese neural networks | siamese neural network, semi-supervised learning | Siamese networks are used to label new examples when we have a large number of classes: https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf. We focus on designing semi-supervised versions of siamese networks, which use only a small number of labeled examples. An example of such a method: https://www.ijcai.org/proceedings/2017/0358.pdf. | Marek Śmieja | marek.smieja[AT]ii.uj.edu.pl | NO | ||||||
77 | Clustering with pairwise constraints | clustering, pairwise constraints (must-link, cannot-link), semi-supervised learning | Clustering is an ill-posed problem. Making use of a small amount of labeled data, we can specify what we mean by similarity. This is the area of semi-supervised clustering. We will focus on constructing discriminative clustering models that take the information about labeled data into account. | Marek Śmieja | marek.smieja[AT]ii.uj.edu.pl | NO | ||||||
78 | Generative models in the multi-label case | generative models, flow models, disentanglement, multi-label classification | We construct a semi-supervised generative model for partially labeled data. More precisely, every example can be labeled with many binary attributes, but we only have access to a few labels. Such a generative model should allow for generating new examples with desired properties (labels). | Marek Śmieja, Maciej Wołczyk, Łukasz Maziarka | marek.smieja[AT]ii.uj.edu.pl | NO | ||||||
79 | Multi-output regression for object tracking | generative models, image processing, object detection, hypernetworks, clustering, regression | We consider the problem of predicting object positions. As the future is uncertain to a large extent, modeling the uncertainty and multimodality of the future states is of great relevance. For this purpose we use a generative model that takes the multimodality and uncertainty into account. The aim of the project is to compare with https://openaccess.thecvf.com/content_CVPR_2019/papers/Makansi_Overcoming_Limitations_of_Mixture_Density_Networks_A_Sampling_and_Fitting_CVPR_2019_paper.pdf | Marek Śmieja, Jacek Tabor, Przemysław Spurek | marek.smieja[AT]ii.uj.edu.pl | NO | ||||||
80 | Auto-encoder with discrete latent space | auto-encoder, generative models, discrete variables, reparametrization trick, importance sampling | Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this project we want to build an auto-encoder model with a discrete latent space, see https://arxiv.org/pdf/1611.01144.pdf (a Gumbel-softmax sketch is given below the table). | Marek Śmieja, Jacek Tabor, Łukasz Struski, Klaudia Balazy | marek.smieja[AT]ii.uj.edu.pl | NO | ||||||
81 | Learning neural networks from missing data | missing data, convolutional neural networks | The project concerns the problem of training convolutional neural networks directly on missing data. We plan to extend the following model: https://papers.nips.cc/paper/7537-processing-of-missing-data-by-neural-networks.pdf | Marek Śmieja, Łukasz Struski, Jacek Tabor | marek.smieja[AT]ii.uj.edu.pl | NO | ||||||
82 | Pharmacophoric Autoencoder | deep learning, cheminformatics, autoencoders | The plan is to create an autoencoder that works with molecular graphs embedded in 3D space. Such a generative model should be able to generate compounds with some pre-defined 3D constraints. As input, it takes 3D molecular graphs and pharmacophoric features represented as 3D points. The 3D positions are generated by molecular docking software. The pharmacophoric (3D) constraints are given in the latent space of this model. | Tomasz Danel, Łukasz Maziarka, Bartosz Podkanowicz, Artur Kasymov, Sabina Podlewska, Marek Śmieja, Igor Podolak | tomasz.danel[AT]ii.uj.edu.pl | NO | PyTorch | |||||
83 | Non-Gaussian Gaussian Processes | deep learning, meta-learning, few-shot learning, regression, gaussian processes, normalizing flows | Gaussian Processes (GPs) have been widely used in machine learning to model distributions over functions, with applications including multi-modal regression, time-series prediction, and few-shot learning. GPs are particularly useful in the last application since they rely on Normal distributions and, hence, enable closed-form computation of the posterior probability function. Unfortunately, because the resulting posterior is not flexible enough to capture complex distributions, GPs assume high similarity between subsequent tasks – a requirement rarely met in real-world conditions. In this work, we address this limitation by leveraging the flexibility of Normalizing Flows to modulate the posterior predictive distribution of the GP, which makes the GP posterior locally non-Gaussian. | Marcin Sendera, Jacek Tabor, Aleksandra Nowak, Andrzej Bedychaj, Massimiliano Patacchiola, Tomasz Trzciński, Przemysław Spurek, Maciej Zięba | marcin.sendera[AT]doctoral.uj.edu.pl | NO | PyTorch | |||||
84 | Better Knowledge Transfer for Non-Gaussian Gaussian Processes (Continuous Object Tracking) | deep learning, meta-learning, few-shot learning, object tracking, gaussian processes, normalizing flows | This project is based on the previous Non-Gaussian Gaussian Processes (NGGP) work. We are going to enhance the NGGPs' flexibility by introducing the full information coming from the support data in the Few-Shot Learning setting. The main aim is to enable much faster learning and knowledge transfer from task to task. We also want to apply such a solution to the continuous object tracking problem, specifically, to generate the probability density function of the object position over time, assuming knowledge of a discrete set of past positions. | Marcin Sendera, Jacek Tabor, Massimiliano Patacchiola, Przemysław Spurek, Maciej Zięba, Rafał Nowak | marcin.sendera[AT]doctoral.uj.edu.pl | NO | PyTorch | |||||
85 | Normalizing Flows in Anomaly Detection | deep learning, generative models, normalizing flows, anomaly detection | Anomaly detection is the problem of identifying abnormal or novel data among normal ones. We propose to utilize the flexibility of Normalizing Flow models, which can be treated as bijections from the original space to a new latent space. This property allows introducing explicit algebraic expressions for the points' density in the latent space. We utilize various objective functions combined with different flow models to discover anomalies. | Marcin Sendera, Jacek Tabor | marcin.sendera[AT]doctoral.uj.edu.pl | NO | PyTorch | |||||
86 | Mol2Image Translation: Generation of High-Content Images Based on Chemical Structures | convolutional neural networks, computer vision, cheminformatics | High-Content Screening is a technology that accelerates drug discovery pipelines by providing a fast method for screening vast numbers of chemical compounds and analysing the output images from fluorescence microscopy (Google "high-content screening", the images are quite cool). In this project we want to create a generative model that transforms chemical structures (SMILES strings or molecular graphs) into images representing phenotypical changes in cellular systems. | Tomasz Danel, Adriana Borowa | tomasz.danel[AT]ii.uj.edu.pl | NO | PyTorch or Tensorflow | |||||
87 | Interpretable Uncertainty in Molecular Data | cheminformatics, machine learning, uncertainty | In drug discovery pipelines, it is important to accurately predict molecular properties for yet unseen chemical compounds. Of course, the quality of predictions decreases as we depart from the known chemical space. There are some methods for assessing the uncertainty of predictions for out-of-domain data, e.g. conformal prediction, but they do not provide us with explanations of why the predictions were marked as uncertain. We plan to create an interpretable uncertainty estimator that indicates "strange" chemical fragments (different from the training set) in the compound. | Tomasz Danel, Anna Bielawska | tomasz.danel[AT]ii.uj.edu.pl | NO | ML basics (scikit-learn, numpy), basic understanding of chemistry is a plus (solid high-school level) | |||||
88 | Neural Molecular Docking | deep learning, geometric deep learning, cheminformatics, computer-aided drug design | The goal is to create a one-shot model that predicts docking poses of molecules inside a binding pocket. Drugs bind to the binding pocket of the target protein to modulate its functions. Typically, these drug-target interactions can be modelled by molecular docking, which predicts the binding pose of the compound in 3D space. Molecular docking methods are time-consuming due to the optimization methods used. In this project, we want to develop a neural network that can quickly generate docking poses, or even distributions over possible docking poses. | Tomasz Danel, Przemysław Spurek, Adam Sułek, Wojciech Sekta, Krzysztof Wierzbicki | tomasz.danel[AT]ii.uj.edu.pl | NO | PyTorch (preferably PyTorch Geometric) | |||||
89 | Interpretable Graph Neural Networks Using Prototypes (with Applications in Drug Discovery) | deep learning, explainable artificial intelligence, graph neural networks, interpretability, cheminformatics | Molecular graphs are a popular representation of molecules in machine learning. State-of-the-art methods for predicting molecular properties are based on graph neural networks. However, the interpretability of these methods is limited. In this project, we plan to implement a prototype-based method for interpreting the results of graph neural networks. The idea is borrowed from computer vision, where prototypes were used with convolutional neural networks to provide explanations for the predictions. Each prototype corresponds to some image feature that can be easily recognized by a human eye. | Tomasz Danel, Dawid Rymarczyk, Daniel Dobrowolski | tomasz.danel[AT]ii.uj.edu.pl | NO | PyTorch (preferably PyTorch Geometric) | |||||
90 | Aggregation Methods for Molecular Graphs | graph neural networks, cheminformatics, clustering | In this project, we aim to investigate different approaches to graph aggregation in order to find a method best suited for molecular data. We will also implement a pooling layer based on chemical fragments, e.g. functional groups, to simplify the molecular graph input and (hopefully) increase the predictive performance of the network. | Tomasz Danel, Ewa Swatowska | tomasz.danel[AT]ii.uj.edu.pl | NO | PyTorch (preferably PyTorch Geometric) | |||||
91 | Contrastive Learning for Graphs | graph theory, graph neural networks, cheminformatics | The aim of this project is to create a method for unsupervised/self-supervised/semi-supervised pre-training of graph neural networks using contrastive learning. Contrastive methods use graph similarity to learn representations of the input graph data. In some domains, such as chemistry, graph edit distance does not describe dissimilarities between graphs well. We plan to create better methods, e.g. ones aligned with the perception of chemists in the molecular domain. | Tomasz Danel | tomasz.danel[AT]ii.uj.edu.pl | NO | PyTorch or TensorFlow | |||||
92 | Disentangling world models in reinforcement learning | reinforcement learning, world models, generative models, disentanglement | World models are usually trained in an unsupervised manner, and their latent codes do not have any inherent meaning or interpretability. In this project, we try to build world models (such as Dreamer [1] or IRIS [2]) using techniques from disentangling generative models [3]. [1] https://arxiv.org/abs/1912.01603 [2] https://arxiv.org/abs/2209.00588 [3] https://arxiv.org/abs/2109.09011 | Maciej Wołczyk | maciej.wolczyk[at]gmail.com | NO | Python, preferably PyTorch or Jax | |||||
93 | On-policy continual reinforcement learning | continual learning, reinforcement learning, transfer learning | We are looking to extend our Continual World benchmark (https://arxiv.org/abs/2105.10919) by introducing and benchmarking new algorithms such as PPO. | Maciej Wołczyk | maciej.wolczyk[at]gmail.com | NO | Python, preferably TensorFlow 2 | |||||
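ILLUSTRATIVE CODE SKETCHES

The sketches below are referenced from the project descriptions above. Each one is a simplified, hypothetical illustration of the named technique, written for this document only; none of them is the groups' actual code, and all architectures, names, shapes, and hyperparameters are assumptions.

For the MAML project (row 17): a minimal sketch of one second-order MAML meta-step on toy sine-regression tasks. The tiny functional MLP, the task distribution, and both step sizes are illustrative placeholders.

```python
# A minimal second-order MAML sketch on toy sine-regression tasks.
# Everything here (architecture, tasks, step sizes) is a hypothetical
# placeholder, not the group's actual code.
import torch

def forward(params, x):
    # Tiny functional MLP: params = [W1, b1, W2, b2].
    h = torch.tanh(x @ params[0] + params[1])
    return h @ params[2] + params[3]

def sample_task():
    # One "task" = a sine with random amplitude and phase.
    amp, phase = float(torch.rand(1)) * 4 + 1, float(torch.rand(1)) * 3
    def draw(n=10):
        x = torch.rand(n, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return draw

params = [(torch.randn(1, 40) * 0.1).requires_grad_(),
          torch.zeros(40, requires_grad=True),
          (torch.randn(40, 1) * 0.1).requires_grad_(),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for meta_step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):                       # meta-batch of tasks
        draw = sample_task()
        xs, ys = draw()                      # support set
        xq, yq = draw()                      # query set from the same task
        inner_loss = ((forward(params, xs) - ys) ** 2).mean()
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]  # inner step
        ((forward(fast, xq) - yq) ** 2).mean().backward()         # outer loss
    meta_opt.step()
```

The outer loss is backpropagated through the inner adaptation step (hence create_graph=True); how this adaptation behaves over a sequence of tasks is exactly the continual learning question the project asks.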
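For the hypernetwork-based projects (rows 9 and 24): a minimal sketch of a hypernetwork that maps an embedding to the full weight vector of a small target MLP. Feeding different embeddings yields different networks for the same task, which is the ensemble idea of row 24. All sizes and names are assumptions.

```python
# A minimal hypernetwork sketch: a small net maps a per-member (or per-task)
# embedding to the flat weight vector of a target MLP classifier.
import torch
import torch.nn as nn

IN, HID, OUT = 784, 64, 10
N_TARGET = IN * HID + HID + HID * OUT + OUT   # target-net parameter count

class HyperNet(nn.Module):
    def __init__(self, emb_dim=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU(),
                                  nn.Linear(256, N_TARGET))
    def forward(self, emb):
        w = self.body(emb)                    # flat generated weight vector
        i = 0
        def take(*shape):
            nonlocal i
            n = 1
            for s in shape:
                n *= s
            out = w[i:i + n].view(*shape)
            i += n
            return out
        return take(IN, HID), take(HID), take(HID, OUT), take(OUT)

def target_forward(weights, x):
    W1, b1, W2, b2 = weights
    return torch.relu(x @ W1 + b1) @ W2 + b2

hyper = HyperNet()
emb = torch.randn(8)                          # one embedding -> one network
logits = target_forward(hyper(emb), torch.randn(32, IN))
print(logits.shape)                           # torch.Size([32, 10])
```

Only the hypernetwork's parameters are trained; sampling several embeddings gives a cheap, diverse ensemble, and conditioning the embedding on a task identifier is one common route to continual learning.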
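For the 2nd-order SGD project (row 28): a hedged 1-D sketch of the online parabola idea from arXiv:1907.07063. The derivative of the loss along a direction is modelled as a linear function of the position via exponential moving averages; a positive estimated slope (curvature) allows a Newton-like jump to the modelled parabola's minimum. This simplification omits most of the paper (multi-directional variants, integration with Adam).

```python
# A hedged 1-D sketch of an online parabola model: EMAs of (position,
# gradient) give a linear model of the derivative, whose root is a
# Newton-like step. Toy objective; not the paper's full algorithm.
import numpy as np

def dloss(theta):                        # derivative of a toy objective
    return 2.0 * (theta - 3.0) + 0.3 * np.sin(theta)

beta = 0.9                               # EMA forgetting rate
w = m_t = m_g = m_tt = m_tg = 0.0        # EMA weight and moments
theta = 0.0
for step in range(60):
    g = dloss(theta)
    w    = beta * w    + (1 - beta)
    m_t  = beta * m_t  + (1 - beta) * theta
    m_g  = beta * m_g  + (1 - beta) * g
    m_tt = beta * m_tt + (1 - beta) * theta * theta
    m_tg = beta * m_tg + (1 - beta) * theta * g
    t_mean, g_mean = m_t / w, m_g / w
    var = m_tt / w - t_mean ** 2
    lam = (m_tg / w - t_mean * g_mean) / var if var > 1e-12 else 0.0
    if lam > 1e-8:                       # positive curvature: jump to the
        theta = t_mean - g_mean / lam    # modelled parabola's minimum
    else:
        theta -= 0.1 * g                 # plain 1st-order fallback step
print(round(theta, 4))                   # converges near the minimum at ~3
```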
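For the HCR project (row 29): a sketch of the one-dimensional core of Hierarchical Correlation Reconstruction. After normalizing a variable to roughly uniform on [0,1], the density is modelled as rho(x) = sum_j a_j f_j(x) in an orthonormal polynomial basis, and each coefficient a_j is simply the sample mean of f_j, an interpretable moment-like quantity; mixed moments of several variables use products of such basis functions the same way. The beta-distributed toy sample is an assumption.

```python
# A hedged 1-D sketch of Hierarchical Correlation Reconstruction:
# density as a polynomial in an orthonormal basis on [0, 1], with
# coefficients that are just sample means of the basis functions.
import numpy as np

# Orthonormal (rescaled Legendre) basis on [0, 1]; f_0 = 1 is the
# uniform baseline, higher terms encode moment-like corrections.
basis = [
    lambda x: np.ones_like(x),
    lambda x: np.sqrt(3.0) * (2 * x - 1),
    lambda x: np.sqrt(5.0) * (6 * x**2 - 6 * x + 1),
    lambda x: np.sqrt(7.0) * (20 * x**3 - 30 * x**2 + 12 * x - 1),
]

rng = np.random.default_rng(0)
sample = rng.beta(2.0, 5.0, size=10_000)      # toy data already in [0, 1]

coeffs = np.array([f(sample).mean() for f in basis])   # a_j = mean f_j(x)
print("coefficients:", np.round(coeffs, 3))            # a_0 is always 1

def density(x):
    # Truncated estimate; it can dip slightly below zero, which the full
    # method handles with a calibration step omitted here.
    return sum(a * f(x) for a, f in zip(coeffs, basis))

x = np.linspace(0, 1, 5)
print("estimated density:", np.round(density(x), 3))
```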
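For the traveller-adaptation project (row 37): a hedged sketch of the kind of decision module a MaaSSim traveller agent could use, here an epsilon-greedy bandit choosing between two platforms and public transport. The reward model standing in for a simulated day is entirely made up.

```python
# A hedged sketch of a traveller decision module: an epsilon-greedy
# bandit learning which mobility option is (subjectively) best.
import random

ACTIONS = ["uber", "lyft", "public_transport"]

class Traveller:
    def __init__(self, eps=0.1, lr=0.2):
        self.q = {a: 0.0 for a in ACTIONS}     # learned expected utility
        self.eps, self.lr = eps, lr
    def choose(self):
        if random.random() < self.eps:
            return random.choice(ACTIONS)      # explore
        return max(self.q, key=self.q.get)     # exploit best-so-far
    def learn(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

def experienced_reward(action):                # stand-in for one simulated day
    base = {"uber": 1.0, "lyft": 0.8, "public_transport": 0.5}[action]
    return base + random.gauss(0, 0.3)         # waiting time, price noise...

agent = Traveller()
for day in range(500):
    a = agent.choose()
    agent.learn(a, experienced_reward(a))
print(agent.q)   # learned utilities should roughly order the options
```

In the actual project, experienced_reward would come from a MaaSSim simulation day rather than a fixed stochastic table.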
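For the early-exit projects (rows 21, 23, 25, 39, 43, 45): a minimal early-exit inference sketch with an internal classifier head after every block; the first head whose softmax confidence clears a threshold returns, so easy inputs skip the remaining blocks. Architecture, threshold, and the training procedure (omitted) are assumptions.

```python
# A minimal early-exit inference sketch: internal classifiers after each
# block; the first sufficiently confident head returns early.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, dim=64, n_blocks=4, n_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            for _ in range(n_blocks))
        self.heads = nn.ModuleList(
            nn.Linear(dim, n_classes) for _ in range(n_blocks))

    @torch.no_grad()
    def forward(self, x, threshold=0.9):
        for i, (block, head) in enumerate(zip(self.blocks, self.heads)):
            x = block(x)
            probs = F.softmax(head(x), dim=-1)
            if probs.max() >= threshold:       # confident enough: exit early
                return probs, i                # prediction + exit index
        return probs, len(self.blocks) - 1     # fell through to the last head

net = EarlyExitNet()
probs, exit_idx = net(torch.randn(1, 64))
print(f"exited at block {exit_idx}")
```

Note the sketch processes one example at a time; letting a whole batch through when its examples exit at different depths is precisely the differentiable batch-splitting problem of row 23.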
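For the convolutional mixture-of-experts project (row 44): a hedged sketch of a conv MoE layer where a gate computed from globally pooled features mixes a few convolutional experts per image. Real MoE layers typically use sparse top-k routing to realize compute savings; dense mixing is kept here for brevity.

```python
# A hedged sketch of a convolutional mixture-of-experts layer with a
# dense (non-sparse) gate; shapes and expert count are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvMoE(nn.Module):
    def __init__(self, ch=16, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=1) for _ in range(n_experts))
        self.gate = nn.Linear(ch, n_experts)

    def forward(self, x):
        # Gate on globally pooled features: one weight vector per image.
        g = F.softmax(self.gate(x.mean(dim=(2, 3))), dim=-1)     # (B, E)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, C, H, W)
        return (g[:, :, None, None, None] * outs).sum(dim=1)

layer = ConvMoE()
print(layer(torch.randn(2, 16, 8, 8)).shape)   # torch.Size([2, 16, 8, 8])
```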
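For the contrastive-learning projects (rows 22, 33, 51): a compact sketch of an NT-Xent (SimCLR-style) loss. Two augmented views of each image are embedded; each view's positive is its counterpart and every other view in the batch acts as a negative, which is the "maximize similarity across augmentations" objective described in row 51. Batch size, dimension, and temperature are assumptions.

```python
# A compact NT-Xent (SimCLR-style) contrastive loss sketch.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # z1, z2: embeddings of two augmented views, shape (B, D).
    z = F.normalize(torch.cat([z1, z2]), dim=1)     # (2B, D), unit norm
    sim = z @ z.t() / temperature                   # cosine similarities
    sim.fill_diagonal_(float("-inf"))               # mask self-similarity
    B = z1.shape[0]
    # Row i's positive is its counterpart view: i+B for i<B, i-B otherwise.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])
    return F.cross_entropy(sim, targets)            # positive as the "class"

B, D = 8, 32
loss = nt_xent(torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```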
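For the experience-replay project (row 62): a minimal reservoir-sampling replay buffer that keeps a bounded, roughly uniform sample of the stream of past examples and mixes a few of them into every new batch. Capacity and mixing ratio are assumptions.

```python
# A minimal reservoir-sampling replay buffer sketch for continual learning.
import random

class ReplayBuffer:
    def __init__(self, capacity=500):
        self.capacity, self.data, self.seen = capacity, [], 0
    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:                                   # reservoir sampling: every
            j = random.randrange(self.seen)     # example kept with equal prob.
            if j < self.capacity:
                self.data[j] = example
    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

buf = ReplayBuffer()
for task in range(3):                           # stream of tasks
    for step in range(1000):
        current = (f"x_task{task}_{step}", task)
        buf.add(current)
        batch = [current] + buf.sample(7)       # mix new + replayed examples
        # ... train the network on `batch` here ...
print(len(buf.data), buf.data[:2])
```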
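For the discrete-latent auto-encoder project (row 80): a hedged sketch of the Gumbel-softmax trick from the cited paper (https://arxiv.org/pdf/1611.01144.pdf), which relaxes sampling of a categorical latent code into a differentiable softmax over logits plus Gumbel noise, optionally with a straight-through hard sample. PyTorch also ships this as F.gumbel_softmax with the same semantics.

```python
# A hedged sketch of the Gumbel-softmax trick used to backpropagate
# through a categorical latent code in an auto-encoder.
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, hard=False):
    # Gumbel(0, 1) noise via inverse transform sampling.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y = F.softmax((logits + gumbel) / tau, dim=-1)      # soft one-hot sample
    if hard:                                            # straight-through:
        idx = y.argmax(dim=-1, keepdim=True)            # discrete forward,
        y_hard = torch.zeros_like(y).scatter_(-1, idx, 1.0)
        y = (y_hard - y).detach() + y                   # soft backward
    return y

logits = torch.randn(4, 10, requires_grad=True)         # 4 codes, 10 categories
z = gumbel_softmax_sample(logits, tau=0.5, hard=True)
z.sum().backward()                                      # gradients reach logits
print(z.argmax(dim=-1), logits.grad.shape)
```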