<!DOCTYPE HTML>
<!--
Massively by HTML5 UP
html5up.net | @ajlkn
Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
-->
<html>
<head>
<title>Machine learning</title>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" />
<link rel="stylesheet" href="assets/css/main.css" />
<noscript><link rel="stylesheet" href="assets/css/noscript.css" /></noscript>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-157529848-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-157529848-1');
</script>
<script
src="https://code.jquery.com/jquery-3.3.1.js"
integrity="sha256-2Kok7MbOyxpgUVvAk/HJ2jigOSYS2auK4Pfzbm7uH60="
crossorigin="anonymous">
</script>
<script>
$(function(){
$("#contact_info").load("contact_info.html")
});</script>
</head>
<body class="is-preload">
<!-- Wrapper -->
<div id="wrapper" class="fade-in">
<!-- Header -->
<header id="header">
<a href="index.html" class="logo">portfolio</a>
</header>
<!-- Nav -->
<nav id="nav">
<ul class="links">
<li style="background-color: rgba(39, 39, 39, 0.5);"><a href="US_internship_FR.html">FR/EN</a></li>
<li><a href="index.html">My projects</a></li>
<li><a href="aboutme.html">About me</a></li>
<li><a href="services_EN.html">Services</a></li>
</ul>
<ul class="icons">
<li><a href="https://www.linkedin.com/in/maxime-gillot-6b0920179/" class="icon brands fa-linkedin"><span class="label">LinkedIn</span></a></li>
<li><a href="https://github.com/Maxlo24" class="icon brands fa-github"><span class="label">GitHub</span></a></li>
<li><a href="https://www.instagram.com/maxime_gt69" class="icon brands alt fa-instagram"><span class="label">Instagram</span></a></li>
<li><a href="https://www.youtube.com/@maximeg3178/videos" class="icon brands alt fa-youtube"><span class="label">youtube</span></a></li>
</ul>
</nav>
<div id="BtnHautPage">
<a href="#main" class="button icon solid solo fa-arrow-up scrolly">Continue</a>
</div>
<!-- Main -->
<div id="main">
<!-- Post -->
<section class="post">
<!-- Introduction -->
<header class="major">
<span class="date">August 2021 - 2022</span>
<h1>Automated dental tools using Deep Learning</h1>
<p style="text-align: justify;">
For one year, I worked with <b>the dentistry school of the University of Michigan</b> in Ann Arbor, USA.
The Department of Orthodontics and Pediatric Dentistry has a research group composed of computer scientists and orthodontists.
The laboratory works in collaboration with the Neuro Image Research and Analysis Laboratories (<a href="https://www.med.unc.edu/psych/research/niral/"><b class="Hlink">NIRAL</b></a>) in North Carolina,
as well as <a href="https://www.kitware.com"><b class="Hlink">Kitware, Inc.</b></a>, a company specialized in the research and development of open-source software in the fields of computer vision, medical imaging, visualization, 3D data publishing, and technical software development.
During this internship, <b>I developed and implemented two machine learning tools</b> (<a href="https://github.com/Maxlo24/ALI_CBCT"><b class="Hlink">ALICBCT</b></a> and <a href="https://github.com/Maxlo24/AMASSS_CBCT"><b class="Hlink">AMASSS</b></a>) to assist expert clinicians in diagnosis, treatment, and research on patients' craniofacial scans.
The tools are now deployed on two open-source ecosystems, available for free to anyone: <a href="https://smart-doc.dent.umich.edu/#/"><b class="Hlink">Smart-DOC</b></a>, a web-based system, and <a href="https://www.slicer.org"><b class="Hlink">3D Slicer</b></a>, an open-source software package for image analysis and scientific visualization.
This was made possible by a collaboration of clinical centers from all over the world. Through this common effort, we have already developed four machine learning tools ready to be used in the <a href="https://github.com/DCBIA-OrthoLab/SlicerAutomatedDentalTools"><b class="Hlink">Slicer Automated Dental Tools</b></a> module.
</p>
<p style="text-align: justify;">
<b>I published a paper as first author on each of the following tools: <a href="#AMASSS" class="scrolly">AMASSS</a> and <a href="#ALICBCT" class="scrolly">ALICBCT</a></b> <br>
They are available on my Google Scholar profile: <a href="https://scholar.google.com/citations?user=i_tpL0gAAAAJ&hl=en&oi=ao"><b class="Hlink">Maxime Gillot</b></a> <br>
I also wrote a <a href="#book" class="scrolly">book chapter</a> describing our team's work. <br>
</p>
</header>
<h2> <a id="AMASSS"></a> AMASSS</h2>
<p style="text-align: justify;">
Publication in PLOS ONE: <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0275033"><b class="Hlink">Automatic multi-anatomical skull structure segmentation of cone-beam computed tomography scans using 3D UNETR</b></a>
</p>
<p style="text-align: justify;" >
AMASSS stands for Automatic Multi-Anatomical Skull Structure Segmentation, here applied to CBCT scans. Segmentation of medical and dental images is a visual task that aims to identify the voxels of organs or lesions in background grey-level scans.
It is a prerequisite for medical image analysis. Particularly for challenging dental and craniofacial conditions, such as dentofacial deformities, craniofacial anomalies, and tooth impaction, quantitative image analysis requires efficient solutions to the time-consuming and user-dependent task of image segmentation.
<br>
Detailed manual segmentation of the large field-of-view CBCT images commonly used in orthodontics and oral and maxillofacial surgery requires, on average, many hours of work from experienced clinicians: 7 hours for the full face, 1.5 hours for the mandible, 2 hours for the maxilla, 2 hours for the cranial base (CB), 1 hour for the cervical vertebrae (CV), and 30 minutes for the skin.
The trained machine learning models segment the same CBCT scans in less than 5 minutes with high accuracy.
</p>
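<p style="text-align: justify;">As an illustration only (not the AMASSS code), patch-based 3D segmentation inference can be sketched in a few lines: the scan is tiled with overlapping cubes, each patch is segmented, and the overlapping predictions are averaged back into a full-size probability map. A simple intensity threshold stands in here for the trained 3D UNETR model.</p>

```python
import numpy as np

def sliding_window_segment(volume, patch=32, stride=16, predict=None):
    """Patch-based 3D segmentation sketch: tile the scan with
    overlapping cubes, segment each patch, and average the
    overlapping predictions into a full-size probability map."""
    if predict is None:
        # Stand-in for a trained model (e.g. a 3D UNETR): a plain
        # intensity threshold produces a per-voxel probability.
        predict = lambda p: (p > 0.5).astype(np.float32)
    probs = np.zeros_like(volume, dtype=np.float32)
    counts = np.zeros_like(volume, dtype=np.float32)
    zs, ys, xs = volume.shape
    for z in range(0, zs - patch + 1, stride):
        for y in range(0, ys - patch + 1, stride):
            for x in range(0, xs - patch + 1, stride):
                sl = (slice(z, z + patch),
                      slice(y, y + patch),
                      slice(x, x + patch))
                probs[sl] += predict(volume[sl])  # accumulate prediction
                counts[sl] += 1.0                 # track overlap count
    return probs / np.maximum(counts, 1.0)       # average overlaps
```

In the real pipeline the patch predictor is a trained network and the output channels cover the different skull structures; this sketch only shows the tiling-and-averaging mechanics.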
<div style="box-shadow: none;" class="image main"><img src="images/Machine_Learning/AMASSS.png" alt="All results"></div>
<h2> <a id="ALICBCT"></a> ALICBCT</h2>
<p style="text-align: justify;">
Publication : <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/ocr.12642"><b class="Hlink">Automatic landmark identification in cone-beam computed tomography</b></a>
</p>
<p style="text-align: justify;">
ALICBCT stands for <b>Automatic Landmark Identification</b> in CBCT scans.
Accurate anatomical landmark localization in medical imaging data is a challenging problem, due to the frequent ambiguity of landmark appearance and the rich variability of anatomical structures.
Landmark detection is a prerequisite for medical image analysis. It supports entire clinical workflows, from diagnosis and treatment planning to intervention, follow-up of anatomical changes or disease conditions, and simulations.
In this work, we presented a new method inspired by deep reinforcement learning.
<b>The landmark detection task is set up as a behavior classification problem for an artificial agent that navigates through the voxel grid of the image at different spatial resolutions.</b>
</p>
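<p style="text-align: justify;">A minimal toy sketch of this behaviour-classification idea (not the ALICBCT implementation): at each step a classifier maps the agent's state to one of seven actions, six unit moves along the axes or "stop". Here a hand-written rule that looks at the true landmark offset stands in for the trained network, which in reality sees only the image intensities inside its field of view.</p>

```python
import numpy as np

# Six unit moves along the voxel-grid axes; action 6 means "stop".
MOVES = {0: (1, 0, 0), 1: (-1, 0, 0),
         2: (0, 1, 0), 3: (0, -1, 0),
         4: (0, 0, 1), 5: (0, 0, -1)}

def classify(pos, landmark):
    """Stand-in behaviour classifier: pick the axis with the largest
    remaining offset to the landmark. A real agent would classify its
    local image patch instead of using the ground-truth position."""
    delta = np.array(landmark) - np.array(pos)
    if not delta.any():
        return 6  # already at the landmark: "stop"
    axis = int(np.argmax(np.abs(delta)))
    return axis * 2 + (0 if delta[axis] > 0 else 1)

def navigate(start, landmark, max_steps=200):
    """Move the agent one voxel at a time until it classifies 'stop'."""
    pos = list(start)
    for _ in range(max_steps):
        action = classify(pos, landmark)
        if action == 6:
            break
        pos = [p + s for p, s in zip(pos, MOVES[action])]
    return tuple(pos)
```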
<div class="image main"><div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/879178335?autoplay=1&loop=1&title=0&byline=0&portrait=0&autopause=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script></div>
<p>Presentation of all the developed tools at Moyers symposium 2022</p>
<div style="box-shadow: none;" class="image main"><img src="images/Machine_Learning/ALICBCT.png" alt="All results"></div>
<!-- <p>Presentation of the project in video: </p>
<div class="image main"><div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/869659254?autoplay=1&loop=1&title=0&byline=0&portrait=0&autopause=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script></div> -->
<header class="major">
<span class="date" id="1">files</span>
<h1><a id="book"></a> Book chapter</h1>
<embed src="Documents/Book_chapter.pdf" width="900px" height="1150px" />
<!-- <div class="image main"><img src="" alt="Mon CV" /></div> -->
</header>
<div class="image main"><div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/879172223?autoplay=1&loop=1&title=0&byline=0&portrait=0&autopause=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script></div>
<p style="text-align: justify;">
The virtual agent has a box-shaped field of view around its center point, which it analyzes at every iteration to determine the next direction in which it should move.
By repeating these steps until the agent stops moving, we eventually reach the area of the desired landmark. We perform this process at two different resolutions:
first at a lower resolution to move quickly and approximate the location, and then at a higher resolution to precisely pinpoint the landmark's location.
</p>
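<p style="text-align: justify;">The two-resolution idea can be sketched independently of the agent (a hypothetical example, not the ALICBCT code): find a candidate position on a downsampled copy of the volume, then refine only inside the small full-resolution block behind it. Here the "landmark" is simply the brightest voxel.</p>

```python
import numpy as np

def coarse_to_fine_peak(volume, factor=4):
    """Coarse-to-fine localization sketch: approximate the target on a
    block-averaged low-resolution copy, then refine at full resolution
    inside the single block that the coarse hit points to."""
    zs, ys, xs = (s // factor for s in volume.shape)
    # Coarse pass: block averaging plays the role of the agent's fast
    # low-resolution navigation.
    coarse = volume[:zs * factor, :ys * factor, :xs * factor].reshape(
        zs, factor, ys, factor, xs, factor).mean(axis=(1, 3, 5))
    cz, cy, cx = np.unravel_index(np.argmax(coarse), coarse.shape)
    # Fine pass: search only the full-resolution block behind the
    # coarse maximum.
    block = volume[cz * factor:(cz + 1) * factor,
                   cy * factor:(cy + 1) * factor,
                   cx * factor:(cx + 1) * factor]
    fz, fy, fx = np.unravel_index(np.argmax(block), block.shape)
    return (cz * factor + fz, cy * factor + fy, cx * factor + fx)
```

The coarse pass shrinks the search space by a factor of about <code>factor³</code>, which is why the agent can afford an exhaustive-feeling search at the fine level.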
<p>Another video presenting how the ALI-IOS project works:</p>
<div class="image main"><div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/879174045?autoplay=1&loop=1&title=0&byline=0&portrait=0&autopause=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen" allowfullscreen></iframe></div><script src="https://player.vimeo.com/api/player.js"></script></div>
</section>
</div>
<!-- Footer -->
<div id="contact_info"></div>
<!-- Copyright -->
<div id="copyright">
<ul><li>© Maxime GILLOT</li><li>CSS & JS: <a href="https://html5up.net">HTML5 UP</a></li></ul>
</div>
</div>
<!-- Scripts -->
<script src="assets/js/jquery.min.js"></script>
<script src="assets/js/jquery.scrollex.min.js"></script>
<script src="assets/js/jquery.scrolly.min.js"></script>
<script src="assets/js/browser.min.js"></script>
<script src="assets/js/breakpoints.min.js"></script>
<script src="assets/js/util.js"></script>
<script src="assets/js/main.js"></script>
</body>
</html>