GMUM PROJECTS
Each entry below lists: PROJECT NAME, KEYWORDS, DESCRIPTION, PEOPLE, CONTACT, STUDENT NEEDED, and REQUIREMENTS/ADDITIONAL INFO.

PROJECT NAME: Nonlinear Independent Component Analysis (Nonlinear ICA)
KEYWORDS: Independent Component Analysis, Wasserstein Auto-Encoder, Generative Adversarial Network, disentanglement learning
DESCRIPTION: Given a mixture of independent components, the goal of ICA is to retrieve the original sources. For linear mixing, several methods have been proposed over the past 20 years and are routinely used as standard tools in data analysis. We are interested in the much harder setting where the mixing function is nonlinear. Our goal is to develop models for nonlinear ICA based on deep auto-encoders, and potentially generative models based on feature disentanglement, with applications in source signal separation and disentanglement learning.
PEOPLE: Aleksandra Nowak, Przemysław Spurek, Andrzej Bedychaj, Jacek Tabor, Łukasz Maziarka
CONTACT: aleksandrairena.nowak[AT]student.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

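For a flavour of the auto-encoder direction, here is a minimal PyTorch sketch; the decorrelation penalty below is an illustrative assumption (a weak proxy for full independence), not the project's actual criterion:

    import torch
    import torch.nn as nn

    class ICAAutoEncoder(nn.Module):
        """Auto-encoder whose latent code is pushed towards uncorrelated
        components (a necessary condition for independence)."""
        def __init__(self, dim, latent_dim):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, dim))

        def forward(self, x):
            z = self.encoder(x)
            return z, self.decoder(z)

    def decorrelation_penalty(z):
        # Penalize off-diagonal entries of the latent covariance matrix.
        z = z - z.mean(dim=0, keepdim=True)
        cov = (z.t() @ z) / (z.shape[0] - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).sum()

    model = ICAAutoEncoder(dim=8, latent_dim=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(128, 8)  # placeholder for a batch of observed mixtures
    z, x_hat = model(x)
    loss = ((x_hat - x) ** 2).mean() + 0.1 * decorrelation_penalty(z)
    opt.zero_grad(); loss.backward(); opt.step()

Decorrelation alone does not guarantee independence; replacing it with a stronger criterion (adversarial, kernel-based, or WAE-style) is where the actual research lies.
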
PROJECT NAME: Interpretable Latent Space Interpolation in Molecular/Graph Generative Models
KEYWORDS: deep learning, cheminformatics, autoencoders
DESCRIPTION: The goal of this project is to interpolate in the latent space of an autoencoder model so that molecules (graphs) along the interpolation path use only "legal" operations. Legal operations in chemistry are those that can be achieved in a chemical reaction. More generally, each interpolation step on graphs should involve one operation such as adding nodes, removing nodes, or modifying connections in the graph.
PEOPLE: Tomasz Danel, Igor Podolak
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch

PROJECT NAME: Graph Convolution Neural Network
KEYWORDS: Graph Convolution Neural Network
DESCRIPTION: Graph Convolutional Networks (GCNs) have recently become the primary choice for learning from graph-structured data, superseding hash fingerprints in representing chemical compounds. However, GCNs lack the ability to take into account the ordering of node neighbors, even when there is a geometric interpretation of the graph vertices that provides an order based on their spatial positions. To remedy this issue, we propose the Geometric Graph Convolutional Network, which uses spatial features to efficiently learn from graphs that can be naturally located in space. Our contribution is threefold: we propose a GCN-inspired architecture which (i) leverages node positions, (ii) is a proper generalisation of both GCNs and Convolutional Neural Networks (CNNs), and (iii) benefits from augmentation, which further improves performance and assures invariance with respect to the desired properties.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

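A minimal sketch of a position-aware graph convolution in the spirit described above; gating messages by a learned function of relative positions is an illustrative design choice, not the project's actual layer:

    import torch
    import torch.nn as nn

    class GeometricGraphConv(nn.Module):
        """Message passing where each message is modulated by a learned
        function of the neighbour's relative spatial position."""
        def __init__(self, in_dim, out_dim, pos_dim=3):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)
            self.pos_net = nn.Sequential(nn.Linear(pos_dim, 16), nn.ReLU(), nn.Linear(16, out_dim))

        def forward(self, h, pos, adj):
            # h: (n, in_dim) node features, pos: (n, pos_dim), adj: (n, n) in {0, 1}
            msg = self.lin(h)                          # (n, out_dim), message from each node
            rel = pos.unsqueeze(1) - pos.unsqueeze(0)  # rel[i, j] = pos[i] - pos[j]
            gate = self.pos_net(rel)                   # (n, n, out_dim)
            out = (adj.unsqueeze(-1) * gate * msg.unsqueeze(0)).sum(dim=1)
            return out / adj.sum(dim=1, keepdim=True).clamp(min=1)

    layer = GeometricGraphConv(in_dim=8, out_dim=16)
    h, pos = torch.randn(5, 8), torch.randn(5, 3)
    adj = (torch.rand(5, 5) > 0.5).float()
    out = layer(h, pos, adj)  # (5, 16)

With a constant gate the layer reduces to plain GCN mean aggregation, which is the sense in which such designs generalise GCNs.
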
PROJECT NAME: Extending auto-encoder architectures
KEYWORDS: WAE, VAE
DESCRIPTION: A project at an early stage of development, based on extending the architecture of generative auto-encoders.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Adversarial examples
KEYWORDS: deep neural networks
DESCRIPTION: A project at an early stage of development, based on detecting adversarial examples by means of auto-encoders placed at every layer of the network.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Analysis of 3D point clouds
KEYWORDS: deep neural networks
DESCRIPTION: The project consists in building generative models dedicated to 3D objects.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Augmenting SGD optimizers with low-dimensional 2nd-order information
KEYWORDS: SGD optimization
DESCRIPTION: SGD optimization is currently dominated by 1st-order methods like Adam. Augmenting them with 2nd-order information would suggest, for example, an optimal step size. Such an online parabola model can be maintained nearly for free by extracting linear trends of the gradient sequence (arXiv: 1907.07063), and is planned to be included to improve standard methods like Adam.
PEOPLE: Jarek Duda
CONTACT: jaroslaw.duda[at]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis preferred.

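A heavily simplified, per-coordinate sketch of the idea; estimating curvature by exponentially weighted linear regression of gradients against parameter values is an assumption made here for brevity (the paper extracts linear trends of the gradient sequence in a more general way):

    import numpy as np

    def parabola_sgd(grad_fn, theta, lr=0.1, beta=0.9, steps=100, eps=1e-8):
        """Fit g ~ lam * theta + b per coordinate with exponential moving
        averages; the slope lam estimates curvature, giving a Newton-like step."""
        m_t, m_g, m_tt, m_tg = (np.zeros_like(theta) for _ in range(4))
        for _ in range(steps):
            g = grad_fn(theta)
            m_t = beta * m_t + (1 - beta) * theta
            m_g = beta * m_g + (1 - beta) * g
            m_tt = beta * m_tt + (1 - beta) * theta * theta
            m_tg = beta * m_tg + (1 - beta) * theta * g
            lam = (m_tg - m_t * m_g) / (m_tt - m_t * m_t + eps)
            # Newton-like step where curvature is positive, plain SGD otherwise,
            # clipped as a crude trust region.
            step = np.where(lam > eps, g / lam, lr * g)
            theta = theta - np.clip(step, -1.0, 1.0)
        return theta

    # f(x) = x^2 has gradient 2x and curvature 2; the fitted slope recovers it.
    print(parabola_sgd(lambda x: 2 * x, np.array([5.0])))  # ~ [0.]
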
PROJECT NAME: Hierarchical correlation reconstruction
KEYWORDS: modelling joint distribution, non-stationarity
DESCRIPTION: HCR combines the advantages of machine learning and statistics: at very low cost it offers an MSE-optimal model of the joint distribution of multiple variables as a polynomial, by decomposing statistical dependencies into (interpretable) mixed moments. It allows extracting and exploiting very weak statistical dependencies that are not accessible to other methods such as KDE. It can also model their time evolution for non-stationary time series, e.g. in financial data. This project develops the method and searches for its further applications (slides: https://www.dropbox.com/s/7u6f2zpreph6j8o/rapid.pdf).
PEOPLE: Jarek Duda
CONTACT: jaroslaw.duda[at]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: Experience in mathematics and statistics preferred.

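A minimal numpy sketch of the core HCR estimate for two variables, following the cited materials: variables are normalized to roughly uniform [0, 1] with the empirical CDF, and the joint density is expanded in an orthonormal (shifted Legendre) polynomial basis, whose coefficients are just mixed moments:

    import numpy as np

    def basis(x, j):
        # Orthonormal polynomial basis on [0, 1]: f_0 = 1, f_1, f_2.
        return [np.ones_like(x),
                np.sqrt(3) * (2 * x - 1),
                np.sqrt(5) * (6 * x ** 2 - 6 * x + 1)][j]

    def to_uniform(v):
        # Empirical CDF: ranks rescaled to (0, 1).
        return (np.argsort(np.argsort(v)) + 0.5) / len(v)

    def hcr_density(x_data, y_data, degree=2):
        """rho(x, y) = sum_jk a_jk f_j(x) f_k(y), where each coefficient
        a_jk is simply the average of a product of basis functions."""
        u, w = to_uniform(x_data), to_uniform(y_data)
        a = np.array([[np.mean(basis(u, j) * basis(w, k))
                       for k in range(degree + 1)] for j in range(degree + 1)])
        return lambda x, y: sum(a[j, k] * basis(x, j) * basis(y, k)
                                for j in range(degree + 1) for k in range(degree + 1))

    x = np.random.randn(10000)
    y = x + 0.5 * np.random.randn(10000)  # correlated toy data
    rho = hcr_density(x, y)
    # Density is higher along the diagonal than off it:
    print(rho(np.array([0.9]), np.array([0.9])), rho(np.array([0.9]), np.array([0.1])))
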
PROJECT NAME: Multi-label classification with missing (not-at-random) labels
KEYWORDS: multi-label classification, missing data
DESCRIPTION: Most multi-label classification models in which some labels are missing ignore the reason why the labels are missing. Nevertheless, missing values often contain valuable information, which should not be discarded. In this project we focus on the "missing not at random" case and construct a neural network model for this situation.
PEOPLE: Marek Śmieja, Jacek Tabor et al.
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Rand-index classification
KEYWORDS: hierarchical classification, decision trees and neural networks, interpretable ML
DESCRIPTION: We want to use a pairwise loss as a classification loss. The goal is to construct a graph/tree structure for hierarchical classification. In contrast to the typical entropy and Gini index measures used in decision trees, a pairwise loss can be trained directly on pairs rather than on sets. Given a tree structure, we can use existing labels for defining classification rules. The model will be used in multi-label classification, extreme classification, interpretable models, etc.
PEOPLE: Marek Śmieja, Jacek Tabor et al.
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

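One possible pairwise loss, sketched below as an assumption of how such training could start: with soft class assignments, the probability that two points fall into the same group is the inner product of their assignment vectors, which can be trained directly against same/different pair supervision:

    import torch

    def pairwise_rand_loss(p_i, p_j, same):
        """p_i, p_j: (batch, k) soft assignments (rows sum to 1);
        same: (batch,) 1.0 if the pair shares a label, else 0.0."""
        p_same = (p_i * p_j).sum(dim=1).clamp(1e-6, 1 - 1e-6)
        return -(same * p_same.log() + (1 - same) * (1 - p_same).log()).mean()

    # Usage with any network ending in a softmax over k groups:
    logits_i, logits_j = torch.randn(32, 10), torch.randn(32, 10)
    same = torch.randint(0, 2, (32,)).float()
    loss = pairwise_rand_loss(logits_i.softmax(dim=1), logits_j.softmax(dim=1), same)
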
PROJECT NAME: Semi-supervised learning on tabular data
KEYWORDS: semi-supervised classification, real-life datasets
DESCRIPTION: Most semi-supervised and self-supervised models are designed for image data, where augmentations are straightforward. In this project, we will investigate whether similar ideas can be used for tabular data (medical data, market data, etc.), which are common in many applications (an example of such a model: https://papers.nips.cc/paper/2020/file/7d97667a3e056acab9aaf653807b4a03-Paper.pdf). For this purpose, we will design specific augmentations for tabular data and apply typical paradigms of semi-supervised learning, e.g. FixMatch: https://arxiv.org/abs/2001.07685
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

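To make the plan concrete, a sketch of tabular augmentations combined with a FixMatch-style consistency loss; the particular augmentations (Gaussian noise as "weak", random feature swapping across the batch as "strong") are assumptions the project would validate:

    import torch
    import torch.nn.functional as F

    def weak_aug(x, sigma=0.01):
        return x + sigma * torch.randn_like(x)

    def strong_aug(x, p=0.3):
        # Mask random features and replace them with values from other rows,
        # which keeps per-feature marginals realistic.
        mask = (torch.rand_like(x) < p).float()
        return (1 - mask) * x + mask * x[torch.randperm(x.shape[0])]

    def fixmatch_unlabeled_loss(model, x_u, threshold=0.95):
        with torch.no_grad():
            probs = model(weak_aug(x_u)).softmax(dim=1)
            conf, pseudo = probs.max(dim=1)
        ce = F.cross_entropy(model(strong_aug(x_u)), pseudo, reduction="none")
        return (ce * (conf > threshold).float()).mean()  # only confident pseudo-labels

    model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(), torch.nn.Linear(64, 5))
    loss_u = fixmatch_unlabeled_loss(model, torch.randn(128, 20))
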
PROJECT NAME: Explainable extreme classification
KEYWORDS: multi-class classification, interpretable classification
DESCRIPTION: Extreme classification is a type of multi-class classification where the number of classes is extremely high. We will work on constructing such a model which, in contrast to other models, will be easy to interpret. In particular, we want to be able to derive the key factors behind a classification.
PEOPLE: Marek Śmieja, Jacek Tabor et al.
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Uncertainty in semi-supervised and active learning
KEYWORDS: semi-supervised learning, active learning, uncertainty scores
DESCRIPTION: Typical semi-supervised and active learning models are based on the softmax probability model (https://arxiv.org/abs/2001.07685). However, softmax is not the best measure for uncertainty estimation, which is crucial for deciding which examples should be labeled. We will examine a one-vs-all probability model (based on a sequence of sigmoids) in these settings.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

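A sketch of the one-vs-all head in question; reading uncertainty off the largest per-class sigmoid is the simple baseline such a project would start from:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class OneVsAllHead(nn.Module):
        """Per-class sigmoid scores instead of a softmax. The scores need not
        sum to 1, so "no class is likely" becomes expressible, unlike with
        softmax, which always distributes all the mass among the classes."""
        def __init__(self, in_dim, n_classes):
            super().__init__()
            self.lin = nn.Linear(in_dim, n_classes)

        def forward(self, h):
            return torch.sigmoid(self.lin(h))

        def loss(self, h, y):
            # Binary cross-entropy of each class against the rest.
            targets = F.one_hot(y, self.lin.out_features).float()
            return F.binary_cross_entropy(self.forward(h), targets)

    head = OneVsAllHead(128, 10)
    h, y = torch.randn(4, 128), torch.randint(0, 10, (4,))
    loss = head.loss(h, y)
    confidence = head(h).max(dim=1).values  # low everywhere => good labeling candidate
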
PROJECT NAME: Hypernetworks for data streams
KEYWORDS: hypernetworks, sequential data, time series
DESCRIPTION: Hypernetworks are a recent neural network model which has been successfully applied to image restoration problems, 3D point clouds, etc. Data streams, as sequences of data points, have similar characteristics. In this project we will investigate whether hypernetworks can be used for this data type. We will compare them with typical recurrent and convolutional networks.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Contrastive learning for multi-label classification with missing labels
KEYWORDS: contrastive learning, self-supervised learning, ranking loss, missing data
DESCRIPTION: Contrastive learning is a promising idea used in semi-supervised and unsupervised problems (https://github.com/sthalles/SimCLR). It relies on data augmentations, which induce a natural similarity between examples. In this project we will focus on the multi-label case and use similarity between labels as a way of defining similarity between examples.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

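One simple way to turn label similarity into a contrastive signal, sketched with Jaccard overlap of label sets as the (assumed) pair weight:

    import torch
    import torch.nn.functional as F

    def label_weighted_contrastive(z, labels, temperature=0.5):
        """z: (n, d) embeddings; labels: (n, k) binary multi-label matrix.
        Pairs are pulled together in proportion to their label overlap."""
        z = F.normalize(z, dim=1)
        eye = torch.eye(z.shape[0], dtype=torch.bool)
        sim = (z @ z.t() / temperature).masked_fill(eye, float("-inf"))
        inter = labels @ labels.t()
        union = labels.sum(1, keepdim=True) + labels.sum(1) - inter
        w = (inter / union.clamp(min=1)).masked_fill(eye, 0.0)  # Jaccard weights
        log_p = sim.log_softmax(dim=1).masked_fill(eye, 0.0)
        return -(w * log_p).sum(dim=1).mean()

    z = torch.randn(16, 32, requires_grad=True)
    labels = torch.randint(0, 2, (16, 8)).float()
    loss = label_weighted_contrastive(z, labels)

Missing labels could enter by zeroing the weights of pairs whose overlap is unknown, which is one of the questions the project would study.
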
PROJECT NAME: Conditional hypernetwork for continuous image generation
KEYWORDS: generative models for images, hypernetworks
DESCRIPTION: Conditional image generation relies on generating images from specific classes or with desired features (https://towardsdatascience.com/understanding-conditional-variational-autoencoders-cd62b4f57bf8). It was shown that hypernetworks can be used as a functional image representation and applied in typical (unconditional) generative models (https://arxiv.org/pdf/2011.12026.pdf). We investigate whether hypernetworks can also be used for conditional image generation.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[at]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

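A sketch of the functional-image idea, assuming the hypernetwork maps a (latent, class) pair to the weights of a tiny coordinate network (x, y) -> RGB; all sizes and the conditioning scheme are illustrative:

    import torch
    import torch.nn as nn

    class HyperImage(nn.Module):
        """Hypernetwork emitting the weights of a 2-layer MLP that represents
        one image as a function of pixel coordinates; the class label enters
        through an embedding, making the generation conditional."""
        def __init__(self, z_dim=64, n_classes=10, hidden=32):
            super().__init__()
            self.h = hidden
            self.emb = nn.Embedding(n_classes, z_dim)
            n_weights = 2 * hidden + hidden + 3 * hidden + 3  # W1, b1, W2, b2
            self.hyper = nn.Sequential(nn.Linear(2 * z_dim, 256), nn.ReLU(),
                                       nn.Linear(256, n_weights))

        def forward(self, z, y, coords):
            # coords: (p, 2) pixel positions in [-1, 1]; returns (p, 3) colours.
            w = self.hyper(torch.cat([z, self.emb(y)], dim=-1)).squeeze(0)
            h = self.h
            W1, b1 = w[:2 * h].view(h, 2), w[2 * h:3 * h]
            W2, b2 = w[3 * h:6 * h].view(3, h), w[-3:]
            return torch.sigmoid(torch.tanh(coords @ W1.t() + b1) @ W2.t() + b2)

    model = HyperImage()
    grid = torch.stack(torch.meshgrid(torch.linspace(-1, 1, 28),
                                      torch.linspace(-1, 1, 28), indexing="ij"), -1)
    img = model(torch.randn(1, 64), torch.tensor([3]), grid.view(-1, 2)).view(28, 28, 3)

Because the image is a function of continuous coordinates, the same weights can be rendered at any resolution by changing the grid.
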
PROJECT NAME: Hypernetworks inspired by biological networks
KEYWORDS: hypernetworks, biological neural networks
DESCRIPTION: The project aims to use hypernetworks to train smaller, more specialized networks. We want to draw motivation from mechanisms present in the working and learning of the brain.
PEOPLE: Marek Śmieja, Jacek Tabor
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Interpretable NN in computer vision
KEYWORDS: deep networks, interpretability
DESCRIPTION: The aim of the project is to develop new methodologies for presenting the reasoning of neural networks to humans, based on prototype networks.
PEOPLE: Jacek Tabor, Bartosz Zieliński, Łukasz Struski, Dawid Rymarczyk
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Deep learning in microbiology
KEYWORDS: deep learning, microbiology, multiple instance learning
DESCRIPTION: The aim of the project is to develop an algorithm that classifies microbiological datasets, e.g. fungi species or bacteria colonies, and presents to the user the morphological changes underlying the predictions.
PEOPLE: Bartosz Zieliński, Dawid Rymarczyk, Adriana Borowa, Adam Piekarczyk
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in PyTorch (in order to understand the existing code and implement new methods).

PROJECT NAME: Information Bottleneck in Neural Networks: HSIC as a Mutual Information Estimator
KEYWORDS: Deep Learning, Theory of Deep Learning, Information Bottleneck
DESCRIPTION: Information Bottleneck theory tries to understand the optimization and generalization of neural networks by examining the properties of the mutual information between the network's hidden activations and its inputs or outputs. Precise calculation of mutual information in those networks is a difficult task which does not have a definitive solution yet. We propose using HSIC and other kernel-based metrics as an approximation of mutual information.
PEOPLE: Marek Śmieja, Maciej Wolczyk
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: Basic PyTorch or Keras skills

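For reference, the standard biased empirical HSIC estimator with Gaussian kernels, which is the natural starting point here:

    import torch

    def gaussian_gram(x, sigma=1.0):
        return torch.exp(-torch.cdist(x, x) ** 2 / (2 * sigma ** 2))

    def hsic(x, y, sigma=1.0):
        """Biased empirical HSIC: trace(K H L H) / (n - 1)^2, where H centres
        the Gram matrices. Its population value is zero exactly when x and y
        are independent (for characteristic kernels such as the Gaussian),
        which is what lets it stand in for mutual information."""
        n = x.shape[0]
        K, L = gaussian_gram(x, sigma), gaussian_gram(y, sigma)
        H = torch.eye(n) - torch.ones(n, n) / n
        return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

    x = torch.randn(256, 5)
    print(hsic(x, x).item(), hsic(x, torch.randn(256, 5)).item())  # dependent >> independent
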
PROJECT NAME: Segmentation learning with region of confidence
KEYWORDS: deep neural networks, learning methods
DESCRIPTION: What if not all objects in a single sample are labeled? Can we develop a method for learning deep models in such a case?
PEOPLE: Krzysztof Misztal
CONTACT: krzysztof.misztal@uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis preferred.

PROJECT NAME: Generative Models in Drug Design
KEYWORDS: deep learning, cheminformatics, generative models
DESCRIPTION: We want to find a way of generating chemical molecules that is useful from the perspective of the drug design process. Primarily, the new generative model should be able to follow given structural constraints and generate structural analogs, i.e. molecules similar to previously seen promising compounds.
PEOPLE: Tomasz Danel, Łukasz Maziarka
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch and TensorFlow

PROJECT NAME: Do classifiers know what they don't know?
KEYWORDS: deep networks, classification
DESCRIPTION: The goal of the project is to develop new methods of improving uncertainty estimation in neural networks: improving calibration and detecting out-of-distribution points (low certainty of predictions for such points). See: https://openaccess.thecvf.com/content_CVPR_2019/papers/Hein_Why_ReLU_Networks_Yield_High-Confidence_Predictions_Far_Away_From_the_CVPR_2019_paper.pdf
PEOPLE: Marek Śmieja, Jacek Tabor, Krzysztof Misztal, Bartosz Wójcik, Jacek Grela
CONTACT: Bartosz.Wojcik@ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: PyTorch or TensorFlow

GMUM PROJECTS (Not actively looking for students)
These projects are not actively looking for students. However, if you find a project interesting and would like to know more about it, you may contact the person listed in the CONTACT field.

PROJECT NAME: New representations for molecules
KEYWORDS: deep learning, cheminformatics
DESCRIPTION: We would like to find new representations for molecules. We could work both on new embedding methods for molecules and on new input representations for graph neural networks.
PEOPLE: Łukasz Maziarka, Tomasz Danel, Agnieszka Pocha
CONTACT: lukasz.maziarka@student.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: Python, PyTorch, TensorFlow

PROJECT NAME: Continual Learning with Experience Replay
KEYWORDS: deep learning, continual learning, experience replay
DESCRIPTION: Catastrophic forgetting occurs in neural networks: when training on a new task, the model completely forgets what it has learned on previous tasks. One of the most efficient ways of combating catastrophic forgetting is experience replay, i.e. retraining on a small set of examples from previous tasks. However promising, this approach has not been properly explored. This project aims to understand and improve experience replay methods for continual learning.
PEOPLE: Maciej Wołczyk, Marek Śmieja, Jacek Tabor
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch basics

PROJECT NAME: Conditional Computation for Efficient Inference
KEYWORDS: deep learning, model compression, conditional computation
DESCRIPTION: The human brain can adaptively change the amount of resources used for the current task. However, neural networks constantly use all of their available resources for every example. This is not only inconsistent with the biological perspective, but also highly inefficient. We work on an approach that uses fewer resources (layers, neurons) for easy examples and all available resources for difficult examples.
PEOPLE: Maciej Wołczyk, Bartosz Wójcik, Marek Śmieja, Jacek Tabor
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: PyTorch basics

PROJECT NAME: Hypernetworks Knowledge Distillation
KEYWORDS: deep learning, teacher-student, computer vision, super resolution
DESCRIPTION: We are using two hypernetworks in a teacher-student manner to solve the super-resolution task.
PEOPLE: Maciej Wolczyk, Szymon Rams, Tomasz Danel, Łukasz Maziarka
CONTACT: lukasz.maziarka@student.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Aspect Level Sentiment Classification
KEYWORDS: Natural Language Processing, Sentiment Classification, Attention Modeling, Deep Learning
DESCRIPTION: Aspect-level sentiment classification aims to identify the sentiment expressed towards given aspects in context sentences. Recently, Hu et al. proposed CAN (https://arxiv.org/pdf/1812.10735.pdf). However, such a mechanism suffers from a major drawback: it seems to overly focus on a few frequent words with sentiment polarities, while little attention is paid to low-frequency ones. Our potential solution to this issue is supervised attention.
PEOPLE: Magdalena Wiercioch
CONTACT: mgkwiercioch[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Molecule Representation for Predicting Drug-Target Interaction
KEYWORDS: Deep Learning, Representation Learning, Cheminformatics
DESCRIPTION: An essential part of the drug discovery process is predicting drug-target interactions. However, the process is expensive in terms of both time and cost. A precisely learned molecule representation in a drug-target interaction model could contribute to developing personalized medicine, which would help many patient cohorts. We want to propose a few molecule representations based on various concepts, including, but not limited to, deep neural networks.
PEOPLE: Magdalena Wiercioch
CONTACT: mgkwiercioch[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Deep learning for molecular design
KEYWORDS: Deep Learning, Cheminformatics, Molecular Design
DESCRIPTION: The search for new molecules in areas such as drug discovery usually starts from the core structures of candidate molecules, whose properties of interest are then optimized. Our present work proposes a graph recurrent generative model for molecular structures. The model incorporates side information into the recurrent neural network.
PEOPLE: Magdalena Wiercioch
CONTACT: mgkwiercioch[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Optimization in deep policy gradient methods
KEYWORDS: Deep Learning, Reinforcement Learning, Optimization
DESCRIPTION: Deep policy gradient methods, currently among the most used tools of reinforcement learning researchers, have some non-obvious optimization properties. We investigate questions such as: why is PPO more efficient than TRPO, how important are the various tricks used when implementing PPO, and how can we improve the sample efficiency of these methods?
PEOPLE: Maciej Wołczyk
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Optimization in neural networks without backpropagation and gradients
KEYWORDS: Deep Learning, Optimization, Bio-inspired
DESCRIPTION: Neuroscientific studies of the mechanisms of learning in the brain suggest that backpropagation (and especially backpropagation through time, as in RNNs) may not be a viable method of learning in neural structures. We want to explore other, more biologically justified approaches to this problem.
PEOPLE: Jacek Tabor, Aleksandra Nowak, Maciej Wołczyk
CONTACT: maciej.wolczyk[AT]gmail.com
STUDENT NEEDED: NO

PROJECT NAME: Fidelity-Weighted Learning
KEYWORDS: neural networks, deep neural networks, learning methods
DESCRIPTION: Fidelity-weighted learning is a student-teacher method for learning from labels of varying quality.
PEOPLE: Krzysztof Misztal, Agnieszka Pocha
CONTACT: krzysztof.misztal@uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Generating Active Compounds Through Predicting Molecular Docking Components
KEYWORDS: deep learning, cheminformatics, computer-aided drug design, molecular simulation
DESCRIPTION: In medicinal chemistry, to assess which chemical compounds will be active towards a given target, a library of promising compounds is docked to the target (in simulation). Compounds which dock well are promising drug candidates to be synthesized. We want to generate compounds that dock well instead of using real activity data.
PEOPLE: Tomasz Danel, Łukasz Maziarka, Igor Podolak, Stanisław Jastrzębski
CONTACT: tomasz.danel[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Continual Learning in vision tasks
KEYWORDS: continual learning, deep networks
DESCRIPTION: The aim of the project is to develop a method for continual learning of deep neural network architectures. Such a model should be able to learn new tasks without forgetting the previous ones, where some part of the model is shared between tasks and the smallest possible amount of old-task resources is kept to prevent forgetting.
PEOPLE: Jacek Tabor, Igor Podolak, Bartosz Zieliński, Łukasz Struski, Dawid Rymarczyk
CONTACT: bartosz.zielinski[AT]uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Neural networks adapting to datasets: learning network size and topology
KEYWORDS: deep learning, network pruning, neural architectures
DESCRIPTION: We introduce a flexible setup allowing a neural network to learn both its size and topology during the course of standard gradient-based training. The resulting network has the structure of a graph tailored to the particular learning task and dataset. The obtained networks can also be trained from scratch and achieve virtually identical performance. We explore the properties of the network architectures for a number of datasets of varying difficulty, observing systematic regularities.
PEOPLE: Romuald Janik, Aleksandra Nowak
CONTACT: aleksandrairena.nowak[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: https://arxiv.org/abs/2006.12195

PROJECT NAME: Relationship between disentanglement and multi-task learning
KEYWORDS: deep learning, disentanglement learning, multi-task learning, hard parameter sharing
DESCRIPTION: One of the main arguments behind studying disentangled representations is the assumption that they can be easily reused in different tasks. At the same time, finding a joint, adaptable representation of data is one of the key challenges in many multi-task learning settings. The aim of the project is to take a closer look at the relationship between disentanglement and multi-task learning.
PEOPLE: Łukasz Maziarka, Aleksandra Nowak, Andrzej Bedychaj, Maciej Wołczyk
CONTACT: aleksandrairena.nowak[AT]doctoral.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Explaining metabolic stability (pilot study)
KEYWORDS: cheminformatics, explainability, web service
DESCRIPTION: Metabolic stability is one of several molecular properties optimised in drug design pipelines. It is connected with the duration of the desirable therapeutic effect of the drug. The exact mechanisms of drug metabolism are yet to be discovered; explaining the predictions of machine learning models can give us ideas about which chemical structures are important. We plan to publish a pilot study in the International Journal of Molecular Sciences (IF: 4.56, number of ministerial points: 30). The planned submission date is by the end of the year.
PEOPLE: Agnieszka Pocha, Sabina Podlewska
CONTACT: agnieszka.pocha[at]doctoral.uj.edu.pl
STUDENT NEEDED: NO
REQUIREMENTS/ADDITIONAL INFO: The AI and explainability parts are mostly done. We need a student to build an interactive webpage which will present the existing explanations and generate new ones (using the provided Python API) for molecules uploaded by users. We require knowledge of designing and building webpages, as well as of standard technologies including HTML, CSS and JavaScript.

PROJECT NAME: Semi-supervised siamese neural network
KEYWORDS: siamese neural network, semi-supervised learning
DESCRIPTION: Siamese networks are used to label new examples when we have a large number of classes: https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf. We focus on designing semi-supervised versions of siamese networks, which use only a small number of labeled examples. An example of such a method: https://www.ijcai.org/proceedings/2017/0358.pdf
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Clustering with pairwise constraints
KEYWORDS: clustering, pairwise constraints (must-link, cannot-link), semi-supervised learning
DESCRIPTION: Clustering is an ill-posed problem. Making use of a small number of labeled data points, we can specify what we mean by similarity. This is the area of semi-supervised clustering. We will focus on constructing discriminative clustering models which take the information about labeled data into account.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Generative model in the multi-label case
KEYWORDS: generative models, flow models, disentanglement, multi-label classification
DESCRIPTION: We construct a semi-supervised generative model for partially labeled data. More precisely, every example can be labeled with many binary attributes, but we only have access to a few of the labels. Such a generative model should allow for generating new examples with desired properties (labels).
PEOPLE: Marek Śmieja, Maciej Wołczyk, Łukasz Maziarka
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Multi-output regression for object tracking
KEYWORDS: generative models, image processing, object detection, hypernetworks, clustering, regression
DESCRIPTION: We consider the problem of predicting object position. As the future is uncertain to a large extent, modeling the uncertainty and multimodality of the future states is of great relevance. For this purpose we develop a generative model that takes multimodality and uncertainty into account. The aim of the project is to compare it with https://openaccess.thecvf.com/content_CVPR_2019/papers/Makansi_Overcoming_Limitations_of_Mixture_Density_Networks_A_Sampling_and_Fitting_CVPR_2019_paper.pdf
PEOPLE: Marek Śmieja, Jacek Tabor, Przemysław Spurek
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

PROJECT NAME: Auto-encoder with discrete latent space
KEYWORDS: auto-encoder, generative models, discrete variables, reparametrization trick, importance sampling
DESCRIPTION: Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables, due to the inability to backpropagate through samples. In this project we want to build an auto-encoder model with a discrete latent space; see https://arxiv.org/pdf/1611.01144.pdf
PEOPLE: Marek Śmieja, Jacek Tabor, Łukasz Struski, Klaudia Balazy
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO

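The Gumbel-Softmax reparametrization from the linked paper, in a short sketch (temperature and shapes are illustrative):

    import torch
    import torch.nn.functional as F

    def gumbel_softmax_sample(logits, tau=0.5, hard=True):
        """Differentiable sample from a categorical distribution. Adding Gumbel
        noise and applying a softmax relaxes the argmax; with hard=True the
        forward pass is exactly one-hot while gradients flow through the
        soft sample (straight-through estimator)."""
        g = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
        y_soft = F.softmax((logits + g) / tau, dim=-1)
        if not hard:
            return y_soft
        idx = y_soft.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y_soft).scatter_(-1, idx, 1.0)
        return y_hard - y_soft.detach() + y_soft

    # In the auto-encoder, the encoder emits logits over k discrete codes and
    # the decoder receives a (nearly) one-hot latent, keeping training end-to-end.
    logits = torch.randn(32, 10, requires_grad=True)
    z = gumbel_softmax_sample(logits)
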
PROJECT NAME: Learning neural networks from missing data
KEYWORDS: missing data, convolutional neural networks
DESCRIPTION: The project concerns the problem of training convolutional neural networks directly on missing data. We plan to extend the following model: https://papers.nips.cc/paper/7537-processing-of-missing-data-by-neural-networks.pdf
PEOPLE: Marek Śmieja, Łukasz Struski, Jacek Tabor
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: NO