GMUM Projects
Fields: PROJECT NAME | KEYWORDS | DESCRIPTION | PEOPLE | CONTACT | STUDENT NEEDED | REQUIREMENTS/ADDITIONAL INFO
PROJECT NAME: Nonlinear Independent Component Analysis (Nonlinear ICA)
KEYWORDS: Independent Component Analysis, Wasserstein Auto-Encoder, Generative Adversarial Network, disentanglement learning
DESCRIPTION: Given a mixture of independent components, the goal of ICA is to retrieve the original sources. For linear mixing, several methods have been proposed over the past 20 years and are now standard tools in data analysis. We are interested in a much harder setting, in which the mixing function is nonlinear. Our goal is to develop models for Nonlinear ICA based on deep auto-encoders, and potentially generative models based on feature disentanglement.
PEOPLE: Aleksandra Nowak, Przemysław Spurek, Andrzej Bedychaj, Jacek Tabor, Łukasz Maziarka
CONTACT: aleksandrairena.nowak[AT]student.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
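To illustrate the linear baseline that Nonlinear ICA generalises, here is a minimal sketch using scikit-learn's FastICA on synthetic data (the sources and mixing matrix below are invented for the example; this is not the project's code):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Two independent non-Gaussian sources (ICA cannot separate Gaussians).
n = 2000
s = np.column_stack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])

# Linear mixing: x = s @ A.T for a fixed mixing matrix A.
A = np.array([[1.0, 0.5], [0.3, 1.0]])
x = s @ A.T

# FastICA recovers the sources up to permutation, sign, and scale.
ica = FastICA(n_components=2, random_state=0, max_iter=500)
s_hat = ica.fit_transform(x)

# Each true source should correlate strongly with some recovered component.
corr = np.abs(np.corrcoef(s.T, s_hat.T)[:2, 2:])
print(corr.max(axis=1))  # both entries close to 1
```

In the nonlinear setting targeted by the project, replacing the unmixing matrix with a deep auto-encoder makes the problem far harder, since identifiability is no longer guaranteed without additional assumptions.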
PROJECT NAME: Interpolations in Generative Models
KEYWORDS: Wasserstein Auto-Encoder, Generative Adversarial Network, Fréchet Inception Distance, interpolation optimization
DESCRIPTION: The aim of the project is to analyze the interpolation capabilities of the latent space in deep generative models such as Generative Adversarial Networks and Wasserstein Auto-Encoders. We are interested in devising a simple, easily implementable measure based on the Fréchet Inception Distance that quantifies the quality of the samples obtained along an interpolation path. We are also developing algorithms that could directly maximize the proposed measure.
PEOPLE: Łukasz Struski, Igor Podolak, Jacek Tabor, Krzysztof Maziarz, Aleksandra Nowak
CONTACT: lukasz.struski[AT]uj.edu.pl
STUDENT NEEDED: NO
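A common starting point for such analyses (an illustration of the general setting, not the project's proposed measure): Gaussian latent samples concentrate near a sphere, so a straight-line interpolation passes through low-density regions, while spherical interpolation keeps the norm typical of real samples. A minimal numpy sketch:

```python
import numpy as np

def lerp(z0, z1, t):
    """Straight-line interpolation between two latent vectors."""
    return (1 - t) * z0 + t * z1

def slerp(z0, z1, t):
    """Spherical interpolation: follows the great circle between z0 and z1,
    which better preserves the norm typical of Gaussian latent samples."""
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return lerp(z0, z1, t)
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(64), rng.standard_normal(64)
path = [slerp(z0, z1, t) for t in np.linspace(0, 1, 9)]

# The lerp midpoint shrinks toward the origin; slerp keeps the norm larger.
print(np.linalg.norm(lerp(z0, z1, 0.5)), np.linalg.norm(slerp(z0, z1, 0.5)))
```

An FID-based quality measure, as pursued in the project, would then score the decoded images along such a path against real samples.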
PROJECT NAME: Graph Convolutional Neural Networks
KEYWORDS: Graph Convolutional Neural Network
DESCRIPTION: Graph Convolutional Networks (GCNs) have recently become the primary choice for learning from graph-structured data, superseding hash fingerprints as representations of chemical compounds. However, GCNs cannot take the ordering of node neighbors into account, even when the graph vertices have a geometric interpretation that orders them by their spatial positions. To remedy this, we propose the Geometric Graph Convolutional Network, which uses spatial features to learn efficiently from graphs that are naturally located in space. Our contribution is threefold: we propose a GCN-inspired architecture which (i) leverages node positions, (ii) is a proper generalisation of both GCNs and Convolutional Neural Networks (CNNs), and (iii) benefits from augmentation, which further improves performance and ensures invariance with respect to the desired properties.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
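For reference, the standard GCN layer that the geometric variant generalises can be sketched in a few lines of numpy (this is the usual symmetric-normalisation propagation rule, not the project's position-aware architecture; the toy graph is invented for the example):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One standard GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    H: node features (n, d), A: adjacency matrix (n, n), W: weights (d, d_out)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Tiny example: a path graph on 3 nodes with 4-dimensional node features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.arange(12, dtype=float).reshape(3, 4)
W = np.ones((4, 2))
out = gcn_layer(H, A, W)
print(out.shape)  # (3, 2)
```

Note that the neighbor sum inside this rule is permutation-invariant, which is exactly the limitation the project addresses by injecting node positions.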
PROJECT NAME: Extending the architecture
KEYWORDS: WAE, VAE
DESCRIPTION: A project at an early stage of development, based on extending the architecture of generative auto-encoders.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
PROJECT NAME: Adversarial examples
KEYWORDS: deep neural networks
DESCRIPTION: A project at an early stage of development, based on detecting adversarial examples using auto-encoders attached to every layer of the network.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
PROJECT NAME: Ensemble of deep neural networks
KEYWORDS: deep neural networks
DESCRIPTION: A project at an early stage of development, based on an architecture composed of several neural networks trained simultaneously.
PEOPLE: Przemysław Spurek, Jacek Tabor, and others
CONTACT: przemyslaw.spurek[AT]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
PROJECT NAME: Augmenting SGD optimizers with low-dimensional 2nd-order information
KEYWORDS: SGD optimization
DESCRIPTION: SGD optimization is currently dominated by 1st-order methods such as Adam. Augmenting them with 2nd-order information would suggest, for example, the optimal step size. Such an online parabola model can be maintained nearly for free by extracting linear trends from the gradient sequence (arXiv: 1907.07063), and is planned to be incorporated into standard methods like Adam.
PEOPLE: Jarek Duda
CONTACT: jaroslaw.duda[at]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know the basics of TensorFlow or PyTorch; experience in mathematical analysis is preferred.
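The underlying idea can be sketched on a toy problem: model the loss along a chosen direction as a parabola and jump to its minimum. The sketch below estimates the curvature with an extra gradient evaluation on an invented quadratic; the arXiv paper instead maintains this parabola online from the gradient sequence itself, essentially for free:

```python
import numpy as np

def parabola_step(grad, x, d, eps=1e-4):
    """Model the loss along direction d as a parabola and return the step
    length to its minimum: lambda = -(g . d) / (d^T H d), with the curvature
    estimated from two gradient evaluations (finite differences)."""
    g = grad(x)
    curv = np.dot(grad(x + eps * d) - g, d) / eps   # approximates d^T H d
    return -np.dot(g, d) / curv

# Toy quadratic loss f(x) = 0.5 x^T Q x with its minimum at the origin.
Q = np.diag([1.0, 10.0])
grad = lambda x: Q @ x

x = np.array([3.0, -2.0])
d = -grad(x) / np.linalg.norm(grad(x))      # normalised steepest-descent direction
lam = parabola_step(grad, x, d)
x_new = x + lam * d

# Along d, x_new sits exactly at the parabola minimum: the gradient there
# is orthogonal to d.
print(np.dot(grad(x_new), d))
```

For a true quadratic this step is exact; on a real loss surface the parabola is only a local model, which is why maintaining it online and blending it with a method like Adam is the interesting part.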
PROJECT NAME: Hierarchical correlation reconstruction
KEYWORDS: modelling joint distributions, non-stationarity
DESCRIPTION: HCR combines the advantages of machine learning and statistics: at very low cost it offers an MSE-optimal model of the joint distribution of multiple variables as a polynomial, by decomposing statistical dependencies into (interpretable) mixed moments. It allows extracting and exploiting very weak statistical dependencies that are not accessible to other methods such as KDE. It can also model the time evolution of these dependencies for non-stationary time series, e.g. in financial data. This project develops the method and searches for further applications (slides: https://www.dropbox.com/s/7u6f2zpreph6j8o/rapid.pdf).
PEOPLE: Jarek Duda
CONTACT: jaroslaw.duda[at]uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: Experience in mathematics and statistics preferred.
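The core estimation step can be sketched in numpy: with an orthonormal polynomial basis on [0,1], the MSE-optimal density coefficients are simply the sample means of mixed basis products (the data below is synthetic, and a full HCR pipeline would first normalise each variable to a uniform marginal):

```python
import numpy as np

# Orthonormal (Legendre-type) basis on [0,1]: integral of f_i * f_j is delta_ij.
basis = [
    lambda x: np.ones_like(x),
    lambda x: np.sqrt(3.0) * (2 * x - 1),
    lambda x: np.sqrt(5.0) * (6 * x**2 - 6 * x + 1),
]

def hcr_coeffs(u, v, m=3):
    """Mixed moments a[j,k] = E[f_j(u) f_k(v)] estimated from the sample;
    these are the MSE-optimal coefficients of the joint density
    rho(u, v) = sum_jk a[j,k] f_j(u) f_k(v) on the unit square."""
    return np.array([[np.mean(basis[j](u) * basis[k](v))
                      for k in range(m)] for j in range(m)])

rng = np.random.default_rng(0)
# A dependent pair: v is a noisy copy of u, both constrained to [0,1].
u = rng.uniform(0, 1, 50_000)
v = np.clip(u + 0.1 * rng.standard_normal(50_000), 0, 1)

a = hcr_coeffs(u, v)
# a[0,0] is always 1 (normalisation); a[1,1] is a correlation-like mixed
# moment, clearly nonzero for this dependent pair.
print(round(a[0, 0], 3), round(a[1, 1], 3))
```

The "interpretable" aspect is visible here: each coefficient a[j,k] names one specific kind of dependence (e.g. a[1,1] is correlation-like, a[1,2] relates one variable to the spread of the other), and for non-stationary series these coefficients can themselves be tracked over time.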
PROJECT NAME: Semi-supervised siamese neural networks
KEYWORDS: siamese neural network, semi-supervised learning
DESCRIPTION: Siamese networks are used to label examples when there are very many classes: https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf. In this project we are building a semi-supervised version, i.e. one that uses unlabeled examples in addition to labeled ones. One such method is: https://www.ijcai.org/proceedings/2017/0358.pdf. Later on, we also want to apply this approach to semi-supervised clustering.
PEOPLE: Marek Śmieja
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
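The supervised building block can be sketched in numpy: two inputs pass through the same embedding (the "siamese" weight sharing), and a contrastive loss pulls same-class pairs together while pushing different-class pairs apart. This is a toy forward pass with invented data, shown only to fix the idea; the semi-supervised extension would add a term over unlabeled pairs:

```python
import numpy as np

def embed(x, W):
    """Shared embedding applied to both branches: the same weights W
    encode both inputs of a pair."""
    return np.tanh(x @ W)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Classic pairwise loss: squared distance for same-class pairs,
    hinge on (margin - distance) for different-class pairs."""
    d = np.linalg.norm(z1 - z2, axis=1)
    return np.mean(same * d**2 + (1 - same) * np.maximum(0.0, margin - d)**2)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4)) * 0.1       # shared embedding weights
xa, xb = rng.standard_normal((16, 8)), rng.standard_normal((16, 8))
same = rng.integers(0, 2, 16)               # 1 = same class, 0 = different

loss = contrastive_loss(embed(xa, W), embed(xb, W), same)
print(loss)  # nonnegative by construction
```

The weight sharing is what makes the approach scale to very many classes: the network learns a similarity metric rather than one output unit per class.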
PROJECT NAME: Fingerprint reduction in cheminformatics
KEYWORDS: representation learning, cheminformatics, dimensionality reduction
DESCRIPTION: Chemical compounds are usually represented as graphs or fingerprints. We aim to build a more compact representation. We will focus mainly on supervised or semi-supervised methods, in which labeled data is used for the reduction. We will start by augmenting an auto-encoder model with a classification loss.
PEOPLE: Marek Śmieja, Łukasz Struski
CONTACT: marek.smieja[AT]ii.uj.edu.pl
STUDENT NEEDED: YES
REQUIREMENTS/ADDITIONAL INFO: The student needs to know how to program in both TensorFlow and PyTorch (in order to understand the existing code and implement new methods).
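The starting point mentioned above, an auto-encoder augmented with a classification loss, can be sketched as a single combined objective (a numpy forward pass with invented shapes and weights, not the project's code):

```python
import numpy as np

def forward(x, y, We, Wd, Wc, lam=0.5):
    """Supervised auto-encoder objective: reconstruction MSE plus a
    classification cross-entropy, both computed from the same latent
    code z, so the compact representation is shaped by the labels too."""
    z = np.tanh(x @ We)                         # encoder -> compact code
    x_hat = z @ Wd                              # decoder reconstruction
    logits = z @ Wc                             # classifier head on z
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)           # softmax probabilities
    recon = np.mean((x - x_hat) ** 2)
    xent = -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))
    return recon + lam * xent

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 128))              # stand-in for fingerprints
y = rng.integers(0, 2, 32)                      # activity labels
We, Wd, Wc = (rng.standard_normal(s) * 0.05
              for s in [(128, 16), (16, 128), (16, 2)])
total_loss = forward(x, y, We, Wd, Wc)
print(total_loss)
```

The weight `lam` trades off how much the 16-dimensional code is organised for reconstruction versus for predicting the label; tuning it is part of what such a project would explore.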