"uuid","repository link","title","author","contributor","publication year","abstract","subject topic","language","publication type","publisher","isbn","issn","patent","patent status","bibliographic note","access restriction","embargo date","faculty","department","research group","programme","project","coordinates"
"uuid:8aa780bf-47b6-4f81-b112-29e23bc06a7d","http://resolver.tudelft.nl/uuid:8aa780bf-47b6-4f81-b112-29e23bc06a7d","Applying Large-Scale Weakly Supervised Automatic Speech Recognition to Air Traffic Control","van Doorn, Jan Laurenszoon (TU Delft Aerospace Engineering)","Sun, Junzi (mentor); Hoekstra, J.M. (graduation committee); Jonk, Patrick (mentor); de Vries, Vincent (graduation committee); Delft University of Technology (degree granting institution)","2023","The application of automatic speech recognition in the air traffic control domain has been researched extensively. However, its primary application remains in the training and simulation of air traffic controllers. This is due to the insufficient performance of automatic speech recognition in specific environments, such as air traffic control, where strong performance and safety requirements are paramount. This study demonstrates how a large-scale, weakly supervised automatic speech recognition model, Whisper, could meet these performance requirements and establish a new approach to air traffic control communication. Fine-tuning Whisper in the air traffic control domain resulted in a word error rate of 13.5% on the ATCO2 dataset and 1.17% on the ATCOSIM dataset. Furthermore, the study reveals that fine-tuning with region-specific data can enhance performance by up to 60% in real-world scenarios.","Automatic Speech Recognition; Air Traffic Control; Whisper; ASR; ATC","en","master thesis","","","","","","","","","","","","Aerospace Engineering","",""
"uuid:61e372ab-649a-41f2-8c59-5e8430db73d8","http://resolver.tudelft.nl/uuid:61e372ab-649a-41f2-8c59-5e8430db73d8","Deep Learning-Based Automatic Detection and Classification of Rib Fractures from CT scans","Borren, Noor (TU Delft Mechanical, Maritime and Materials Engineering)","van Walsum, Theo (mentor); Wijffels, Mathieu (mentor); van der Elst, M. (graduation committee); Delft University of Technology (degree granting institution); Universiteit Leiden (degree granting institution); Erasmus Universiteit Rotterdam (degree granting institution)","2023","Introduction: Trauma-induced rib fractures are a common injury, affecting millions of individuals globally each year. Although anteroposterior thoracic radiographs are part of the standard posttraumatic screening, the most sensitive modality, and therefore golden standard for diagnosing rib fractures, is computed tomography (CT). Still, between 19.2% and 26.8% of rib fractures are missed. Another problem encountered in rib fracture treatment management is the large interobserver variability on their taxonomy. This thesis aims to automate rib fracture detection and improve consistency in their classification by developing a Deep Learning (DL) model, using CT data.
Methods: The rib fractures were classified according to the Chest Wall Injury Society (CWIS) taxonomy, evaluating rib fracture’s type, displacement and location. Furthermore, the ribs were numbered from 1 up to and including 12 from cranio-caudal direction. For the detection and three CWIS labels, three classification models of the nnDetection framework were trained. The rib numbering consisted of a trained nnU-Net segmentation model. The four models were combined to obtain the proposed DCRibFrac model.
Experiments and results: The dataset is composed of retrospectively collected and anonymized CT scans of 100 randomly selected patients (1010 rib fractures) who were admitted to the Erasmus MC following blunt chest trauma. On the internal test set, DCRibFrac achieved a detection sensitivity of 77%, precision of 79%, and F1-score of 78%, with a mean false-positives per scan of 2.26. The type labels had the lowest scores, with sensitivities between 17% and 90%. The displacement labels had sensitivities between 43% and 91%. The location labels had the highest scores, with sensitivities between 88% and 96%. The rib number was correct in 72% of the rib fractures when wrong segmentations were excluded.
Conclusion: The proposed DL model automates acute rib fracture detection and reaches a sensitivity that is on par with clinicians. This model is the first, to the authors’ knowledge, to incorporate the CWIS taxonomy and shows its potential for achieving a consistent classification.","Rib Fractures; Automatic detection and classification; Deep Learning; CWIS taxonomy","en","master thesis","","","","","","","","","","","","Technical Medicine","",""
"uuid:5f51b210-c2b5-4093-8ec3-7b6ed5bfc5c7","http://resolver.tudelft.nl/uuid:5f51b210-c2b5-4093-8ec3-7b6ed5bfc5c7","Improving whispered speech recognition using pseudo-whispered based data augmentation","Lin, Chaufang (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Multimedia Computing)","Scharenborg, O.E. (mentor); Dauwels, J.H.G. (graduation committee); Patel, T.B. (graduation committee); Delft University of Technology (degree granting institution)","2023","Whispering, characterized by its soft, breathy, and hushed qualities, serves as a distinct form of speech commonly employed for private communication and can also occur in cases of pathological speech. The acoustic characteristics of whispered speech differ substantially from normally phonated speech and the scarcity of adequate training data leads to low automatic speech recognition (ASR) performance. This project aims to build an ASR system that can recognize both normal and whispered speech and discover which acoustic characteristics of whispered speech have an impact on whispered speech recognition.
In my study, I use signal processing techniques that transform the spectral characteristics of normal speech to those of pseudo-whispered speech, called pseudo-whispered-based data augmentation. I enhance an End-to-End ASR system by incorporating pseudo-whispered speech and state-of-the-art (SOTA) data augmentation methods, speed perturbation and SpecAugment, yielding an 18.2% relative reduction in word error rate compared to the strongest baseline.
Among the accented speaker groups in the wTIMIT database, the best results are obtained for US English. Further investigation uncovers that the lack of pitch in whispered speech has the largest impact on the performance of whispered speech ASR.
Methods: Two methods for spinal structure segmentation were developed and compared. Both methods used segmentations of bony structures obtained from the TotalSegmentator algorithm. The first method employed morphological dilation and erosion operations to localise the joints and IVDs, while the second method used a multi-atlas-based approach with partial atlases and corresponding manually segmented labelmaps. The performance of the methods was assessed on ten manually segmented LDCTs using sensitivity, and maximum and average Hausdorff distance (HD), for the IVDs and the sacroiliac joints (SIJ), and mean error distance for the smaller joints. The reproducibility of the methods was evaluated using a set of 20 LDCT test-retest images.
Results: The atlas-based method achieved significantly better maximum HD (8.45 (1.80) vs. 9.64 (5.83) (p = 0.002)) and sensitivity (0.79 (0.22) vs. 0.61 (0.30) (p < 0.001)) for all IVDs combined compared to the morphological method. The atlas-based method also outperformed the morphological method for the facet joints, costovertebral joints and costotransverse joints, with a mean error distance of 4.71 mm (2.72) vs. 6.90 mm (4.80) (p < 0.001). For the thoracic IVDs, the morphological method showed significantly better average HD (1.48 (1.03) vs. 1.72 (0.53) (p = 0.018)) and maximum HD (6.97 (3.36) vs. 8.22 (1.66) (p < 0.001)) than the atlas-based method. In the reproducibility assessment on the test-retest scans, the atlas-based method outperformed the morphological method for all metrics and structures, with average HDs well below the voxel resolution (< 2 mm).
Conclusion: We present the first methods for automatic segmentation of the spinal structures on LDCT. The atlas-based method seems to be the most suitable algorithm, achieving average HDs below the voxel size and maximum errors below one centimetre. However, it is dependent on accurate segmentation by the TotalSegmentator algorithm. Further research is warranted to investigate the influence of the segmentation results on the extraction of quantitative PET information.","Low-Dose Computed Tomography; Automatic Segmentation; Spinal Joints; Intervertebral Disks; Spondyloarthritis","en","master thesis","","","","","","","","","","","","Technical Medicine","",""
"uuid:df7e3aa8-a384-46cc-8a82-8c028bf53bb3","http://resolver.tudelft.nl/uuid:df7e3aa8-a384-46cc-8a82-8c028bf53bb3","Automatically Generating User-specific Recovery Procedures after Malware Infections","Xu, Cassie (TU Delft Electrical Engineering, Mathematics and Computer Science)","Continella, Andrea (mentor); Verwer, S.E. (mentor); Starink, Jerre (mentor); Durieux, T. (graduation committee); Delft University of Technology (degree granting institution)","2023","Malware poses a serious security risk in today’s digital environment. The defense against malware mainly relies on proactive detection. However, antivirus products often fail to detect new malware when the signature is not yet available. In the event of a malware infection, the common remediation strategy is reinstalling the system. However, the user loses their personal data, and thus it is not an ideal solution.
The academic works on malware remediation focus on system replay and recovery-oriented computing, which relies on heavy monitoring and is not suitable for a normal user’s personal computer. The work from Paleari et al. [31] proposed a remediation methodology that can be used entirely after the infection. They run the malware sample in the sandbox to observe the behavior and generate a revert operation for each action that modifies the system state. However, the limitation of such an approach is unable to deal with the potentially different behaviors in the sandbox and on the real hosts.
In this work, we propose a system that can generate user-specific recovery procedures, without the need of any monitoring in advance. We extend the work from Paleari et al. [31] by combining information from the infected machine. We first extract the environment configuration from the infected computer and configure the same context to the sandbox virtual machine, in order to eliminate the environmental influence on the malware’s behavior. After getting the behavior from the sandbox, we combine forensic evidence to understand the exact actions that happened on the system and generate the user-specific recovery procedures.
We implement a prototype based on Windows 10 and CAPE sandbox and perform an evaluation on 894 malware samples. We are able to recover 51.3% of the changes made by malware, which doubles the recovery rate compared to directly matching the sandbox result. Additionally, our experiment result also demonstrates significantly different actual behavior from the user’s machine and sandbox result. Our system design maximizes the use of information displayed in the sandbox, but the unshown behavior still leads to the biggest limitation of behavior-based recovery.","Malware Remediation; Forensic Analysis; Environment-sensitive Malware; Automatic Recovery","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:8d76a0b5-12f1-4fdd-9787-1c43cbbad79d","http://resolver.tudelft.nl/uuid:8d76a0b5-12f1-4fdd-9787-1c43cbbad79d","Survey of Affect Representation Schemes used in Automatic Affect Prediction for Speech Emotion Recognition: A Systematic Review","Rawat, Aditi (TU Delft Electrical Engineering, Mathematics and Computer Science)","Dudzik, B.J.W. (mentor); Raman, C.A. (mentor); Liem, C.C.S. (graduation committee); Delft University of Technology (degree granting institution)","2023","Automatic affect prediction systems usually assume its underlying affect representation scheme (ARS). This systematic review aims to explore how different ARS are used for in affect prediction systems based on spoken input. The focus is only on the audio input from speakers. Various datasets for speech emotion recognition were also involved in the study to understand the motivation for certain (categorical or dimensional) schemes used for emotions. The basis, popularity, advantages and target affective states were investigated. We used Scopus and Web of Science to extract the papers, focusing on the systems in the field of Computer Science in English language. In summary, our exploration of affect representation schemes in Speech Emotion Recognition (SER) reveals a predominant focus on categorical representations of affect, particularly variations of Ekman's six basic emotions. Behavior and attitude, although rare, are also represented sometimes. Emotions like anger, happiness, and sadness receive the most attention, while the recognition of the neutral state as an emotional state remains controversial. Dimensional affect representation schemes are less common, possibly due to the difficulty in estimating valence solely from audio input. Researchers often combine multiple categorical schemes to accommodate different datasets used in SER systems, aligning the popularity of the schemes with the corresponding datasets. However, issues such as a lack of explanation for chosen categories, interchangeable use of terminology, and a weak psychological foundation for category selection pose challenges in achieving a comprehensive understanding of affect representation in SER research.","Affect Prediction; Affect Representation Scheme; Speech Emotion Recognition; automatic affect recognition","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:a7e4e5b5-af49-4080-b3ce-577e59e27993","http://resolver.tudelft.nl/uuid:a7e4e5b5-af49-4080-b3ce-577e59e27993","Survey of Affect Representation Schemes used in Vision-Based Automatic Affect Prediction: A Systematic Literature Review","Serrano Ruber, Lucia (TU Delft Electrical Engineering, Mathematics and Computer Science)","Dudzik, B.J.W. (mentor); Raman, C.A. (mentor); Liem, C.C.S. (graduation committee); Delft University of Technology (degree granting institution)","2023","In human-human interactions, the majority of information is conveyed through body language, specifically facial expressions. Consequently, researchers have been interested in improving human-computer interactions through developing systems with automatic understanding of body language and facial expressions. This technology is especially useful due to its broad range of applications in fields such as healthcare, education, and safety & security. Vision-based automatic affect recognition (AAR) systems aim to predict a subject’s affective state based on visual input such as image or video. These systems analyze and classify subjects’ facial expressions and body language using affect representation schemes (ARS), most often classified as either categorical or dimensional. This paper explores the current state of ARS used in vision-based AAR through a systematic literature review following PRISMA guidelines. We selected 53 papers from WebOfScience according to our eligibility criteria which included computer science papers written in English proposing a vision-based AAR system targeting single subjects, and excluded studies dealing exclusively with micro-expressions or group affect recognition. Additionally, given the time limitation imposed on this research we excluded papers that were not readily accessible with our TU Delft license, used multimodal input, or did not use a dataset included in our predefined list. For this exploration we specifically look at the schemes used, the popularity and trends of usage, motivations, and psychological basis. From the 53 reviewed papers, all of the papers target utilitarian emotions using at least one discrete ARS. The most commonly used schemes classify affective states into happiness, sadness, fear, anger, surprise, and disgust. While the majority of papers are lacking in providing explicit reasoning for their choices, most ARS are based grounded in psychological theories. Our results show an established norm within this area of research. However, they also evidence a lack of displayed critical thought in the selection of schemes. This oversight limits potential for future AAR research.","Affect Representation Scheme; Automatic Affect Prediction; Systematic Literature Review","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:f1c1087b-3661-46b6-912a-98ebb8ae2550","http://resolver.tudelft.nl/uuid:f1c1087b-3661-46b6-912a-98ebb8ae2550","Investigating the performance of SPEA-II on automatic test case generation","Li, Erwin (TU Delft Electrical Engineering, Mathematics and Computer Science)","Panichella, A. (mentor); Olsthoorn, Mitchell (mentor); Stallenberg, D.M. (mentor); Verwer, S.E. (graduation committee); Delft University of Technology (degree granting institution)","2023","Software testing is an important but time-consuming task, making automatic test case generation an appealing solution. The current state-of-the-art algorithm for test case generation is DynaMOSA, which is an improvement of NSGA-II that applies domain knowledge to make it more suitable for test case generation. Although these enhancements are applicable to other evolutionary algorithms,
no research has been done on how effective other algorithms can function as the base. In this paper, we apply the DynaMOSA modifications to SPEA-II to create a new algorithm, DynaSPEA-II. We conduct an empirical experiment where we evaluate the DynaMOSA enhancements, and directly compare DynaSPEA-II to
DynaMOSA. The algorithms are assessed on a benchmark consisting of 36 diverse JavaScript classes w.r.t. branch coverage. Our results show that adding DynaMOSA enhancements to SPEA-II results in higher coverage in 13.9% of classes, with an average increase of 4.92% for classes where a statistically significant difference was found. DynaSPEA-II performed equally to DynaMOSA, with no statistically significant difference being found between the two.","Search-Based Software Testing; Many Objective Optimisation; automatic testing","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:15a3c657-d304-4177-bbb5-f545d106eab6","http://resolver.tudelft.nl/uuid:15a3c657-d304-4177-bbb5-f545d106eab6","Chewing Detection on Low Power Embedded Systems","Cuţitei, Cristian (TU Delft Electrical Engineering, Mathematics and Computer Science)","Dsouza, V.K.P. (mentor); Pawełczak, Przemysław (mentor); Hung, H.S. (graduation committee); Delft University of Technology (degree granting institution)","2023","Analyzing food consumption patterns can provide valuable insights into the development of obesity and eating disorders. The detection and quantification of chewing strokes are essential to facilitate this analysis. One approach to food intake analysis involves evaluating chewing sounds generated during the eating process. These sounds were recorded by microphones placed to the user’s outer
ear canal. Aside from discovering the most accurate solution, the algorithms used must demonstrate sufficient efficiency to operate on low-power embedded ear-worn hardware. Three algorithms for automated chewing detection were evaluated with the help of two datasets. The first dataset consists of the food intake sounds from the consumption of three types of food. The second dataset consists of environmental noise. The data were manually labeled by recognizing mastication sounds’ visual and audio characteristics. Precision of over 80%
was achieved by all algorithms in the dataset consisting of only chewing sounds. Finally, an efficient solution has been developed to distinguish between speech and chewing sounds.","Embedded; Earable Computing; earable; Automatic","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:efe493e6-e55c-412a-a883-cf9921c28566","http://resolver.tudelft.nl/uuid:efe493e6-e55c-412a-a883-cf9921c28566","Automatic detection of eCAP thresholds: Precision and accuracy of different methods","Schupp, Eleen (TU Delft Mechanical, Maritime and Materials Engineering)","Briaire, J.J. (mentor); Biesheuvel, J.D. (mentor); Hendriks, R.C. (graduation committee); Delft University of Technology (degree granting institution); Universiteit Leiden (degree granting institution); Erasmus Universiteit Rotterdam (degree granting institution)","2023","When a person suffers from severe to profound hearing loss, a cochlear implant (CI) can aid in restoring auditory perception and speech comprehension. To obtain good speech comprehension, fitting of a CI to the user’s specific characteristics is crucial. Fitting can be a time-consuming process which requires the cooperation of a CI user and is dependent on the used methods (e.g., T-level measurements). Besides transmitting electrical stimuli, a CI can also record the response of the auditory nerve fibres to a stimulus. This response is called the electrically evoked compound action potential (eCAP). eCAP responses can be measured objectively without a user’s active cooperation and could potentially aid in the fitting of a CI. For this purpose, the eCAP thresholds are of main interest. eCAP thresholds can be determined manually by a clinical specialist, or automatically by an automatic threshold detection method. Automatic eCAP threshold detection can therefore be of aid in a completely objective and uniform CI fitting method. The goal of this study was to compare different automatic eCAP threshold detection methods (in combination with different averaging methods and different artefact reduction methods) based on the precision and accuracy of these methods.
Five different automatic eCAP threshold detection methods have been examined in this study: sigmoid amplitude growth function (AGF), linear AGF, signal-to-noise ratio (SNR), cross-covariance between adjacent levels and cross-covariance with maximum level. The two different averaging methods that have been examined are standard averaging (SA) and FineGrain averaging (FG), the two different artefact reduction methods are alternating polarity (AP) and forward masking (FM). In total, 20 different combinations have been examined. The success rates of these 20 combinations have been determined, threshold confidence intervals (TCIs) were calculated as a measure of precision and the correlations between eCAP thresholds and T-levels were determined as a measure of accuracy of the different (combinations of) methods.
The combination of FG FM resulted in the highest success rates for different threshold detection methods, and the threshold detection method SNR had the overall highest success rates. A two-way ANOVA revealed that both artefact reduction/averaging method and threshold detection method have a significant effect on the TCIs. The combination of FG FM had the best resultsregarding the TCIs, and the sigmoid AGF threshold detection method was the threshold detection method with the lowest mean TCI. A similar two-way ANOVA was performed for the correlation between eCAP thresholds and T-levels, revealing the same results as for the TCIs that both artefact reduction/averaging method and threshold detection method have a significant effect on the correlation coefficients. FG FM was again the best performing combination, and the sigmoid AGF threshold detection method resulted in the highest correlation coefficients.
Based on these results, it can be stated that the FG FM combination for averaging and artefact reduction was the overall best combination. When comparing the different automatic threshold detection methods, the sigmoid AGF method resulted in eCAP thresholds with the highest precision and accuracy. Future research should focus on obtaining more data, further refinements of the different automatic eCAP threshold detection methods and the use of the determined eCAP thresholds in the clinical fitting of a CI.","Cochlear Implants; eCAP measurements; Automatic threshold detection; Precision; Accuracy","en","master thesis","","","","","","","","","","","","Technical Medicine","",""
"uuid:857658d4-9134-4935-9c28-fef907d9ace1","http://resolver.tudelft.nl/uuid:857658d4-9134-4935-9c28-fef907d9ace1","Floor count from street view imagery using learning-based façade parsing","Dobson, Daniel (TU Delft Architecture and the Built Environment)","Arroyo Ohori, G.A.K. (mentor); Ibrahimli, N. (graduation committee); Delft University of Technology (degree granting institution)","2023","Street view imagery (SVI) is one of the largest (growing) resources in urban analytics. A global close-up of the urban environment, if you will, which is rich in (untapped) information such as floor count. Floor count is useful in many applications, from improving energy consumption calculations to creation of 3D city models without elevation data. So far, efforts to extract floor count from SVI are mainly approached as a classification problem with the use of convolutional neural networks (CNNs). Limitations of this approach include the need of large (manually annotated) datasets, and uncertainty how these models learn to count storeys. Therefore, we aim to develop a method that can be trained on available datasets and determine floor count in a more explainable manner.
In order to make the floor count determination method more transparent, we mimic the row-wise counting of storeys as humans do: by vertically parsing a column of windows (and occasional door). Façade parsing is a common computer vision task that we can solve with deep learning. In this work, we employ the Mask R-CNN framework, that is trained on publicly available datasets, for the detection and segmentation of windows and doors. Then, the vertical distribution of detected / segmented windows and doors is estimated by computing the kernel density estimation function. The floor count is extracted by finding the number of maxima in the function, as the maxima represent the dense areas of windows and doors on a horizontal axis (i.e. storeys). To improve the results, an automatic image rectification is added as pre-processing step that enforces the regularity and repetitive occurrence of windows and doors. The full pipeline thus consists of three stages: 1) automatic image rectification, 2) window and door detection/ segmentation with Mask RCNN, 3) floor count estimation via maxima finding on the kernel density estimation (KDE) function. In addition, a small ""wild"" dataset was created that contains a higher variability in floor count, image quality and architectural styles, which better reflect real world SVI than existing façade datasets.
The floor count performance of the full pipeline was evaluated on the Amsterdam Facade (subset), ECP, TRIMS and ""wild SVI"" datasets. Since floor count annotations were missing, these are manually added. For detection-based data, the best results are an accuracy of 83% and a mean absolute error (MAE) of 0.17. For normalised segmentation-based data, the best results are an accuracy of 80% and a MAE of 0.20. Considering the method is still at its infancy, the results are promising. With further improvements in the pipeline and addition of automatic façade acquisition, the approach can contribute in large scale extraction of floor count information from SVI. To encourage further development, the pipeline prototype, dataset and floor count annotations are open source and will be released on https://github.com/Dobberzoon/Facade2Floorcount.","Floor count; Street View Imagery; façade parsing; facade; Deep Learning; Kernel Density Estimation; floor counting; number of storeys; image rectification; automatic","en","master thesis","","","","","","","","","","","","Geomatics","",""
"uuid:3bbed872-bbef-453b-b2e7-ac30d2864e9c","http://resolver.tudelft.nl/uuid:3bbed872-bbef-453b-b2e7-ac30d2864e9c","Topology Optimization and Physics-Informed Neural Networks for Metamaterial Optics Design","Everingham, Dylan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Vuik, Cornelis (mentor); Möller, M. (mentor); Adam, A.J.L. (mentor); Heemink, A.W. (graduation committee); Heemels, A.N.M. (graduation committee); Delft University of Technology (degree granting institution)","2022","The development of optical metamaterials in recent years has enabled the design of novel optical devices with exciting properties and applications ranging across many fields, including in scientific instrumentation for space missions. This in
turn has led to demand for computational methods which can produce efficient device designs. Traditional optical devices admit a closed-form solution for this inverse design problem. However, in the presence of strong multiple scattering, which is often the case when considering optical metamaterials, the inverse problem becomes ill-posed. As a result, many optimization and machine learning techniques have been applied towards discovering good solutions.
In this MSc thesis project, several of the most promising of these techniques are applied to a specific problem, the discovery of silicon metamaterial lens designs for the CoPILOT high-altitude balloon project. Ultimately, a software tool capable of producing effective and admissible designs is produced and demonstrated.
First, an overview of the CoPILOT design problem is presented. Next, relevant background material topics, including properties of metamaterials and computational methods for simulating them, are covered in some detail. After this, methods used to solve optical design problems in past literature are described and contrasted. Then, a comprehensive explanation of the method developed and used for this project, including important design considerations, is given. The best solutions found using this lens optimization method are shown and compared. Finally, fruitful areas of future work on this topic are listed.","Optics; Machine Learning; Optimization; Space Instrumentation; Topology Optimization; Physics Informed Neural Networks; Automatic Differentiation; Metamaterials","en","master thesis","","","","","","GitHub repository containing all project code and results - https://github.com/deveringham/metalens_optimization","","","","","","Computer Simulations for Science and Engineering (COSSE)","",""
"uuid:b6af07bd-440c-4fd2-98d4-7a5e1848e174","http://resolver.tudelft.nl/uuid:b6af07bd-440c-4fd2-98d4-7a5e1848e174","Cyber-Attack Detection on an Industrial Control System Testbed using Dynamic Watermarking: A Power Grid Application","van den Broek, Geert (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Delft Center for Systems and Control)","Ferrari, Riccardo M.G. (mentor); Keijzer, T. (mentor); Delft University of Technology (degree granting institution)","2022","An Industrial Control System (ICS) is used to monitor and control industrial processes and critical infrastructure, and is therefore crucial to modern society. This makes them attractive targets for malicious cyber-attacks, which have become more advanced and abundant in recent history. To properly defend ICSs from these cyber-attacks, appropriate cyber-defensive mechanisms should be continuously designed and updated, cyber-attack detection mechanisms included. These mechanisms should undergo sufficient testing before being implemented in actual ICSs to minimise unforeseen consequences. Existing literature indicates that Dynamic Multiplicative Watermarking (DMWM) is a promising form of cyber-attack detection, which could improve overall detection performance. Thus far, this technique has not yet been applied to Automatic Generation Control (AGC) (a prominent form of Load Frequency Control (LFC) in power grids) to detect data integrity attacks (specifically scaling and replay attacks).
Ergo, this research aims to test the performance of DMWM against data integrity attacks on AGC. To perform attack detection, a Luenberger observer is utilised. This observer generates a residual, which is compared to a robustly designed threshold. For the purpose of adequate testing, the HILDA (Hardware-In-the-Loop Detection of Attacks) testbed is designed and constructed. By using this testbed, more realistic scenarios can be simulated than with regular desktop simulations. After verifying the correct construction of the testbed, the DMWM performance is examined both in a desktop simulation environment using MATLAB & Simulink, and on the HILDA testbed. It is shown that the addition of DMWM increases the detection performance in the context of both scaling and replay attacks. For replay attacks, this performance increases notably, while for scaling attacks the improvement is more modest. Overall, the attacks are detected more quickly when simulated on the HILDA testbed compared to simulations performed in the MATLAB & Simulink environment. On the other hand, the overall detection ratio was better when simulated in the MATLAB & Simulink environment. This discrepancy in detection performance demonstrates the added value of the HILDA testbed.","Industrial Control System; Dynamic Multiplicative Watermarking; Automatic Generation Control; Load Frequency Control; Data integrity attack; Scaling attack; Replay attack; Hardware-In-the-Loop Testbed","en","master thesis","","","","","","","","","","","","Mechanical Engineering | Systems and Control","","52.00182744704395, 4.3713199611650335"
"uuid:20cc4498-2177-4112-b9d7-e9e2600c625e","http://resolver.tudelft.nl/uuid:20cc4498-2177-4112-b9d7-e9e2600c625e","Automatic Differentiation based Multi-Mode Ptychography: A flexible and highly efficient lensless imaging algorithm","Wang, Yabin (TU Delft Mechanical, Maritime and Materials Engineering; TU Delft Precision and Microsystems Engineering)","Coene, W.M.J.M. (mentor); Westerveld, W.J. (graduation committee); Shao, Y. (graduation committee); Delft University of Technology (degree granting institution)","2022","The scientific community recognizes the critical role played by ptychography in nanoscale imaging. Compared with the conventional imaging, which has high requirements on the manufacturing of optical elements, ptychography, as a computational imaging technique, uses a set of measured intensities of the diffraction patterns to reconstruct the image of the object and hence no imaging system is needed. This technique is especially useful in the short wavelength, e.g. EUV, regime, where manufacturing high quality optical elements such as mirrors is extremely expensive.
Most of the present ptychographic algorithms require the illumination of the object to be both spatially and temporally coherent so that the diffraction pattern can be interpreted as the intensity of the Fourier transform of the field exiting the object. However, the coherence of the sources that produce the EUV radiation often cannot be guaranteed. Therefore, it is crucial to extend the ptychography method to consider partial coherence effects. This requires the use of a flexible propagator which depends on the wavelength to deal with the temporal partial coherence and a modal representation for the spatially partially coherent field. Also, the ambiguity of the reconstructed modes of the probe will be solved by an orthogonalization approach, which could enhance the reproducibility of the results. These methods will be implemented on an existing ptychography platform based on automatic-differentiation and will be validated using both simulation data and experimental data.","computational imaging; Ptychography; Automatic Differentiation","en","master thesis","","","","","","","","","","","","Mechanical Engineering | Precision and Microsystems Engineering","",""
"uuid:df87fbca-7e88-4ea8-858b-3b8f4a194c87","http://resolver.tudelft.nl/uuid:df87fbca-7e88-4ea8-858b-3b8f4a194c87","Bias Mitigation Against Non-native Speakers in Dutch ASR","Zhang, Yixuan (TU Delft Electrical Engineering, Mathematics and Computer Science)","Scharenborg, O.E. (mentor); Patel, T.B. (graduation committee); Delft University of Technology (degree granting institution)","2022","One of the most important problems that needs tackling for wide deployment of Automatic Speech Recognition (ASR) is the bias in ASR, i.e., ASRs tend to generate more accurate predictions for certain speaker groups while making more errors on speech from others. In this thesis, we aim to reduce bias against non-native speakers of Dutch compared to native Dutch speakers. Typically, an important source of bias is insufficient training data. We therefore investigate employing three different data augmentation techniques to increase the amount of non-native accented Dutch training data, i.e., speed and volume perturbation and pitch shift, and using these for two transfer learning techniques: model fine-tuning and multi-task learning, to reduce bias in a state-of-the-art hybrid HMM-DNN Kaldi-based ASR system. Experimental results on read speech and human-computer interaction (HMI) speech showed that although individual data augmentation techniques did not always yield an improved recognition performance, the combination of all three data augmentation techniques did. Importantly, bias was reduced by more than 18% absolute compared to the baseline system for read speech when applying pitch shift data augmentation and multi-task training, and by more than 7% for HMI speech when applying all three data augmentation techniques during fine-tuning, while improving recognition accuracy of both the native and non-native Dutch speech.","automatic speech recognition; bias; data augmentation; transfer learning","en","master thesis","","","","","","","","","","","","Computer Engineering","",""
"uuid:a6b645d2-8d47-44d3-a4ad-1d5a6024f13f","http://resolver.tudelft.nl/uuid:a6b645d2-8d47-44d3-a4ad-1d5a6024f13f","Reinforcement Learning for Flight Control of the Flying V","Völker, Willem (TU Delft Aerospace Engineering)","van Kampen, E. (mentor); Li, Y. (graduation committee); Delft University of Technology (degree granting institution)","2022","Recent research on the Flying V - a flying-wing long-range passenger aircraft - shows that its airframe design is 25% more aerodynamically efficient than a conventional tube-and-wing airframe. The Flying V is therefore a promising contribution towards reduction in climate impact of long-haul flights. However, some design aspects of the Flying V still remain to be investigated, one of which is automatic flight control. Due to the unconventional airframe shape of the Flying V, aerodynamic modelling cannot rely on validated aerodynamic-modelling tools and the accuracy of the aerodynamic model is uncertain. Therefore, this contribution investigates how an automatic flight controller that is robust to aerodynamic-model uncertainty can be developed, by utilising Twin-Delayed Deep Deterministic Policy Gradient (TD3) - a recent deep-reinforcement-learning algorithm. The results show that an offline-trained single-loop altitude controller that is fully based on TD3 can track a given altitude-reference signal and is robust to aerodynamic-model uncertainty of more than 25%.","Reinforcement Learning; Flying V; Deep Deterministic Policy Gradients; TD3; Robust Control; flight control; Automatic Flight Control System; offline learning; fixed-wing; flying wing; altitude control; autopilot","en","master thesis","","","","","","","","","","","","Aerospace Engineering","",""
"uuid:33b125b5-d902-4432-9043-2ceb10f2e53e","http://resolver.tudelft.nl/uuid:33b125b5-d902-4432-9043-2ceb10f2e53e","How could BIM support the digital building permit process in the Netherlands?","Prusti, Maarit (TU Delft Architecture and the Built Environment)","Stoter, J.E. (mentor); Ploeger, H.D. (graduation committee); Delft University of Technology (degree granting institution)","2022","The building permit process in the Netherlands is mostly digitalized, however, there are still some issues. A downsize of information, the manual checking from the municipality, and the duration of the process are some of these issues. To overcome these issues information between 3D building design models, so-called Building Information Modeling (BIM)s, and 3D city models must be exchanged. To exchange interoperable information between BIMs and 3D city models is called integration. In this research, automatic rule checking is performed after the BIM encoded in Industry Foundation Classes (IFC) is converted to a 3D city model encoded in CityJSON. Integration, however, is not as straightforward as it seems. Other researches have been carried out to perform a full integration from 3D city models encoded in CityGML to BIMs encoded in IFC. This is rather complex and so use cases are utilized. In most other researches using the building permit process as a use case, the automatic rule checking is performed in the BIM domain. In this research, a conversion is performed from a BIM encoded in IFC to a 3D city model encoded in CityJSON. The first step in this research is to analyze land use plans and select the most often used rules. These rules are further analyzed on required information to check the rules. For both IFC and CityJSON, the required information for the rules representing the same information as entities in the standards are selected. In the next step, the input models are analyzed on the presence of the entities from the standards. Before the conversion is performed, it is determined which entity will be converted from which input model and whether or not additional information is needed. Finally, the conversion is performed. The 3D city model can be used for rule checking and satisfies the selected rules. The implications of this research are described for the digital building permit process as well as for the integration of the two domains. Guidelines to model correct BIMs for the digital building permit process and further integration are drafted. In conclusion, the tool created in this research works successfully. Automatic rule checking on all the rules in land use plans is technically possible. In practice, automatic rule checking will most likely not take over soon, since rules are still written ambiguously, builders work with 2D drawings mostly, and the Environmental and Planning Act is soon to be established.","BIM; building permit process; automatic rule checking; Netherlands; 3D city models; land use plans","en","master thesis","","","","","","","","","","","","Geomatics","",""
"uuid:ad425136-0e2b-4c52-ab40-709a75e54677","http://resolver.tudelft.nl/uuid:ad425136-0e2b-4c52-ab40-709a75e54677","Effectiveness of Automatic and Semi-Automatic Methods to Collect Common Sense Knowledge","Ezard, François (TU Delft Electrical Engineering, Mathematics and Computer Science)","Houben, G.J.P.M. (graduation committee); Gadiraju, Ujwal (mentor); Yang, J. (mentor); He, G. (mentor); Delft University of Technology (degree granting institution)","2022","Common sense knowledge (CSK) comes naturally to humans, but is very hard for computers to comprehend. However it is critical for machines to behave intelligently, and as such collecting CSK has become a prevalent field of research. Whilst a lot of research has been done to develop CSK acquisition methods, not much work has been done to survey the literature that already exists. Furthermore the surveys that have been done are outdated, and as such there is a clear gap in the literature. This paper will survey the different approaches to CSK acquisition and evaluate their effectiveness, as a way of gauging their real life applicability. It will also compare the current \textit{state of the art} methods, to some previous work to illustrate the progress that has been made and project that into the future. Furthermore this paper will also create a taxonomy categorizing the surveyed literature in order give a better overview of existing methods. Finally, from the literature surveyed it is clear that these methods have made a lot of progress, but aren't quite yet at the same level as human performance. Nevertheless they have become robust enough to be deployed in real applications.","Commonsense Knowledge; Survey; Automatic & Semi-Automatic Methods","en","bachelor thesis","","","","","","","","","","","","Computer Science and Engineering","CSE3000 Research Project",""
"uuid:b078d6b0-4cf2-4d1a-9221-9c0d571db5aa","http://resolver.tudelft.nl/uuid:b078d6b0-4cf2-4d1a-9221-9c0d571db5aa","Automated Medical History Taking in Primary Care: A Reinforcement Learning Approach","GUO, Zhuoran (TU Delft Electrical Engineering, Mathematics and Computer Science)","Al-Ars, Z. (mentor); Jaber, Tareq (mentor); Kitsak, M.A. (graduation committee); Delft University of Technology (degree granting institution)","2022","Online searching for healthcare information has gradually become a widely used internet case. Suppose a patient suffers the symptom but is unsure of the action he needs to take, a self-diagnosis tool can help the patient identify the possible conditions and whether this patient needs to seek immediate medical help. However, the accuracy and quality of the service provided by those self-diagnosis tools are still disappointing and need further improvement. This thesis focuses on an automatic differential diagnosis task with a comprehensive evaluation of reinforcement learning methods. Also, we present a systematic method to simulate medically correct patients records, which integrates a standard symptom modeling approach called NLICE. In this way, we can bridge the gap between limited available patients records and data-driven healthcare methodologies. This project investigates both flat-RL methods and hierarchical RL in an automatic differential diagnosis setting and evaluates the performance of those two kinds of methods on simulated patients records. More specifically, the action space for the differential diagnosis task is inevitably large, so the flat-RL performs relatively poorly in complicated scenarios. The hierarchical RL method can split a complex diagnosis task into smaller tasks: it contains two-level of policy learning, and each low-layer policy imitates one medical specialty. Therefore hierarchical RL method increases the Top 1 success rate from 23.1\% in flat-RL method to 45.4\%.
Besides the advanced policy learning strategy, this thesis explores the ability of NLICE symptom modeling in distinguishing conditions that share similar symptoms. The experimental results experience increases in flat-RL and hierarchical RL models and finally achieve 36.2\% and 71.8\% Top 1 success rates, respectively. To further solve the sparse action space problem in the automatic diagnosis domain, the reward shaping algorithm is implemented in the reward configuration part. The average gained reward of hierarchical RL increases from -3.65 to 0.87. Additionally, we model the general demographic background of patients and utilize contextual information to perform the policy transformation strategy, which eliminates the miss classification problem in highly sex-age related diseases.
This Master thesis investigates the clinician’s perspective on implementing the Assistant. Literature was reviewed to understand the problem space and the design implications of the enabling technologies. Core concepts in human-AI collaboration, such as system transparency and human control, were identified to design for hybrid documentation. Also, the perspectives on recording consultations were translated into values for hospitals, clinicians and patients. The findings reveal both mutual benefits and tensions for the clinician-patient relationship, as well as obstacles to implementation.
To contribute to developing the Assistant, user research was carried out in context by shadowing orthopedic surgeons to observe their day-to-day workflow and understand the current cycle of clinical documentation. Several surgeons were interviewed to gain more in-depth views about the digital scribe. As a synthesis, personas and journey maps were created both for a typical consultation cycle and for a daily workflow. From the research phase, a list of user requirements was gathered in order to aid the design phase and future development. Finally, the envisioned user journey is presented in a service blueprint with the developed interface of the Assistant.
Six cyclic settlement models for sand are evaluated to analyse the settlement of automatic stacking crane (ASC) rail tracks at the Rotterdam World Gateway (RWG) container terminal. During Phase 1 of the RWG container terminal, settlement of the rail tracks occurred at multiple locations after the ASCs became operational. This has repeatedly led to (unplanned) downtime of parts of the RWG container terminal due to rail track maintenance. The settlements are caused by densification of the sand fill, which is a result of the cyclic load applied by ASCs moving continuously over their rail tracks.
The aim of this research is to contribute to preventing unplanned downtime in Phase 2 of the RWG container terminal due to rail track settlements. Reliable settlement predictions can also be used to determine the intensity and extent of the ground compaction needed to meet the settlement requirement of 20 mm for ASC rail tracks.
The cyclic settlement models, which have been validated for predicting the cyclic settlement of rail tracks and shallow foundations, are obtained from the literature. The available soil data include CPTs, boreholes and standard laboratory soil testing. In addition, settlements of the ASC rail tracks in Phase 1 had been measured for a period of almost one year. The cyclic settlement models are evaluated at six different locations, where the sand is medium to very dense and settlements of up to 32 mm have been measured. The load is modelled as a quasi-static load equivalent to a vertical stress of 60 to 90 kPa applied to the ballast-sand interface. The model parameters of the cyclic settlement models are determined by correlation, (FE) modelling of the first load cycle, extrapolation and estimation.
The zone of influence was found to reach around 6 m below the shallow foundation. Densification of the sand fill is substantial within the entire zone of influence. The maximum densification was found not to coincide with the minimum void ratio; it is a variable that depends on the initial state of the sand and the loading and soil conditions. After on the order of 10^4 load cycles, densification of the sand was found to become negligible. To meet the settlement requirement for ASC rail tracks, the sand fill must consist of sand layers with a minimum and average relative density of at least 65% and 85%, respectively.
Cyclic settlement increases with the number of load cycles, the amplitude of the load and the extent of the zone of influence, and decreases with relative density, stiffness of the sand and volumetric threshold strain. However, the correlations used to calibrate the model parameters lead to model predictions that are either oversensitive or insensitive to parameters that affect the cyclic settlement. The cyclic settlement predictions of the terminal density model are the most reliable and match best with the settlement measurements; for loose and medium-dense sand, however, the model predictions underestimate the settlement.
Instead of using correlations to obtain the model parameter values and decrease their uncertainty, it is recommended to measure the:
· disturbance of the sand fill underneath the ASC rail tracks due to construction;
· maximum densification of the sand underneath ASC rail tracks in Phase 1 at locations where rail track settlement has stopped, i.e. where the sand reached its maximum densification;
· model parameters that characterise the cyclic densification behaviour of sand in cyclic soil tests.
This will improve the reliability of the cyclic settlement predictions of ASC rail tracks constructed on a sand fill. To validate the cyclic settlement models for ASC rail tracks on sand, measurements of the settlement with depth as a function of the number of load cycles are needed.","Geotechnical Engineering; Sand; Cyclic loading; Densification; Compaction; Settlement; Rail tracks; Automatic stacking cranes; Shallow foundations","en","master thesis","","","","","","","","","","","","Geo-Engineering","","51.954252, 3.988612"
"uuid:e5da7c83-7e6c-43d2-80b6-7e9d9dd34706","http://resolver.tudelft.nl/uuid:e5da7c83-7e6c-43d2-80b6-7e9d9dd34706","The application of differentiable programming frameworks to computational fluid dynamics","Vos, Bart (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Delft Institute of Applied Mathematics)","Verlaan, M. (mentor); Nuttall, Jonathan (mentor); Budko, N.V. (graduation committee); Delft University of Technology (degree granting institution)","2021","In recent years many automatic differentiable programming frameworks have been developed in which numerical programs can be differentiated through automatic differentiation (AD). Examples of these frameworks are Theano, TensorFlow and Pytorch. These frameworks are widely used in Machine Learning. AD also finds applications in the field of computational fluid dynamics (CFD). It is used to develop discrete adjoint CFD code for research concerning for instance sensitivity analysis, data assimilation and design optimization. However, the use of the automatic differentiable programming frameworks in the field of CFD is limited. One can find some examples in the literature on how to find a numerical solution to an initial value problem using a differentiable programming framework. In this work it will be clarified how one can implement an semiimplicit
time integration scheme for a staggered grid to simulate the propagation of long waves in water with a free surface in TensorFlow. A main advantage of the automatic differentiable programming frameworks is the user friendly application
programming interface (API) for AD. No research has been conducted to use this API in the field of CFD. In this work an example will be given how one can use TensorFlow for research concerning sensitivity analysis. AD requires a significant allocation of memory on a CPU/GPU when working with fine meshes and/or long simulations and since CPU/GPU memory is finite, the method checkpointing is
proposed to make it feasible to perform sensitivity analysis when working with fine meshes and/or long simulations. Another main advantage of the differentiable programming framework TensorFlow is the use of compute unified device architecture (CUDA) of a NVIDIA GPU in order to perform computations in parallel, which results in a significant reduction in computation time. A Benchmark will be given that indicates the computational efficiency of TensorFlow compared to a loop over grid implementation in NumPy and a Fortran CPU scalar implementation.","Computational Fluid Dynamics (CFD); Tensorflow; Shallow Water Equations; Automatic Differentiation; Sensitivity Analysis; Checkpointing; Adjoint-based optimization; Partial Differential Equations solver; Differentiable programming","en","bachelor thesis","","","","","","","","","","","","Applied Mathematics","",""
"uuid:bedc2f78-3f43-4e3c-aa49-532fdc9d2110","http://resolver.tudelft.nl/uuid:bedc2f78-3f43-4e3c-aa49-532fdc9d2110","An automated approach to estimate car- bon monoxide emissions from steel plants by utilizing TROPOMI satellite measure- ments","Anema, Juliette (TU Delft Civil Engineering & Geosciences)","Basu, S. (mentor); Borsdorff, Tobias (mentor); Delft University of Technology (degree granting institution)","2021","Since the 13th of October 2017, the Tropospheric Monitoring Instrument (TROPOMI) aboard ESA’s Sentinel 5-Precursor (S5-P) satellite enables daily global measurements of carbon monoxide (CO) total column con- centrations at an unprecedented spatial resolution of 7×5.6 km. TROPOMI has the ability to detect distinct pollution plumes, arising from point source emissions, from which emission rates can be derived. We in- vestigate the potential of CO column concentrations observed by TROPOMI to estimate the CO emissions of point sources on an operational level. This study developed a Python framework that for pre-defined point sources automatically detects pollution plumes and from which it estimates CO emissions using a mass bal- ance approach directly from single overpass CO observations. The algorithm is based on concepts from the computer vision to identify the plume and extract the plume center line while respecting the plume orienta- tion. The emission rate is approximated from flux profiles through multiple plume cross-sections following the plume center line. The performance of the developed framework and its potential is demonstrated by the application on 132 identified steel plant facilities over a time period of more than 2.5 years. Currently the lack of accessible and quality-wise good data limits spatial or even temporal comparison of CO emissions from steel plants. Therefore the control and understanding of emission rates could greatly benefit from the proposed approach. In total we obtained 1,774 emission estimates for 97 facilities. Up to 119 measurements per facility are derived where for the majority of the facilities the average number of measurements is around 10. The obtained time series showed large variation in the distribution of measurements over time as well as the emission values itself. For a number of higher emission values, that exceeded up to 2 times the aver- age emission, measured for e.g. the Bhilai Steel Plant, India, the outliers corresponded with interference of another source. Although individual plumes could be identified for two sources (∼35 km apart) in the same Bhilai area, no non-merged plumes were detected for the Schwelgern and Huttenheim sites (∼18 km apart) in Duisburg, Germany. Moreover, we tested the agreement of our measurements with recorded or stated events: i) The emission estimate from the afternoon of the 24th of May 2019, Bhilai site, confirmed the manufacturers statement that the operations had continued that day despite a reported fire in the morning. ii) Our results did not match the significant global drop noted in steel production during the first period of 2020 as a result of the pandemic. The scattered distribution of measurements and their emission values over time seem to limit the representation of a small time frame needed for such analysis. iii) We found a positive correlation with a Pearson Coefficient of 0.76 between the European Pollutant Release Transfer Register (E-PRTR) and our data. For all examined facilities our obtained emissions were greater than reported by the facilities to E-PRTR. 
This might indicate an underestimation in the registered data. This first evaluation emphasizes the potential of TROPOMI observations to improve our understanding of point source emissions and to complement existing data such as the E-PRTR. However, to be able to interpret the data from TROPOMI structurally and to develop a reliable validation method, extensive data analysis at plant and area level is required, especially to be able to rule out interfering factors.","Plumes; TROPOMI; CO; Automatic; Detection","en","master thesis","","","","","","","","","","","","Civil Engineering | Environmental Engineering","",""
"uuid:0177ef43-1e9c-4881-9965-707854559064","http://resolver.tudelft.nl/uuid:0177ef43-1e9c-4881-9965-707854559064","Recognition of Personal Opinions: in Dutch Public Records Requests","van Veen, Carmen (TU Delft Electrical Engineering, Mathematics and Computer Science)","Lofi, C. (mentor); Bozzon, A. (graduation committee); Cockx, J.G.H. (graduation committee); Scholtes, J. (graduation committee); Delft University of Technology (degree granting institution)","2020","The Dutch version of the Public Records Request is named the ‘Wet Openbaarheid van Bestuur’ (Wob) , which provides the public with the right to request access to records from any governmental institution. The government has the obligation to provide information about policy and the execution of policy; however individuals who wish to obtain more detailed information, can request that a governmental body publicly disclose certain information. Certain steps that require large amounts of time continue to exists when processing such a request. One of these tasks is described in this thesis, namely the redaction of personal opinions within internal deliberations. The goal of this thesis was to investigate the possibility of automatically recognising personal opinions within internal deliberations in order to speed up the process of handling a Wob-request.","Automatic Recognition; Personal Opinions; Dutch Public Record Requests","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:7d56b70a-4f2f-4f87-8876-b8c5ff09fb36","http://resolver.tudelft.nl/uuid:7d56b70a-4f2f-4f87-8876-b8c5ff09fb36","operational transitions in railway infrastructure","de Hek, Maurits (TU Delft Civil Engineering and Geosciences)","Goverde, R.M.P. (mentor); Baggen, J.H. (mentor); de Bruijne, M.L.C. (mentor); Bokhorst, F. (graduation committee); Delft University of Technology (degree granting institution)","2020","Technological innovations in the Dutch railway sector, such as ERTMS and the 3kV power supply system make it possible to increase the efficiency, effectivity and speed of the railway system in the upcoming decades. The implementation of systems such as ERTMS and 3kV is a lengthy process. The new systems will therefore operate alongside the old systems (ATB-EG and 1.5kV) for a long period of time. Between the old and the new system, operational transitions are required. The number of operational will therefore increase in the coming years. Some of these transitions from one system to another system turn out to be prone to failures and disruptions. When an operational transition fails, this can cause considerable delays on the railway network. Especially when multiple operational transitions are close together and occur nearly simultaneously, the risk of failures in one of these transitions is present. When operational transitions are combined, a failure in one transition can have large effects on another operational transition that is about to take place. Operational transitions are defined by two characteristics. Firstly, they are physical locations in the rail infrastructure. Secondly, operational transition require the driver to switch between two systems, or to change his/her behaviour significantly. The aim of this thesis is to investigate the effect of operational transitions on the reliability of train operations. The magnitude of the impact of operational transitions on the reliability of train operations is often unclear. This thesis contributes to scientific literature by examining a sizable number of operational transition types and providing an initial insight into the effect of these transitions on railway operations. For society in general and for ProRail in particular, the findings of this research can contribute to a better understanding of operational transitions. Using the acquired knowledge, ProRail can take measures that increase the reliability of operational transition passages, thereby increasing the punctuality of train services.","Train operators; Railway infrastructure; Human factors; Transitions; Train service disruptions; Automatic Train Protection System; Vertical track alignment; Train dispatching; Power supply systems","en","master thesis","","","","","","","","","","","","Transport, Infrastructure and Logistics","",""
"uuid:166afe40-198a-4fa7-8a72-b9c08faf46be","http://resolver.tudelft.nl/uuid:166afe40-198a-4fa7-8a72-b9c08faf46be","Strengthen the adaptability of the ERTMS implementation","Westerhuis, Gijsbert (TU Delft Civil Engineering and Geosciences)","Goverde, R.M.P. (mentor); Veeneman, W.W. (mentor); Quaglietta, E. (mentor); Hoeberigs, G.M. (mentor); Delft University of Technology (degree granting institution)","2020","The number of operational rail corridors equipped with ERTMS is increasing throughout Europe. The implementation of this critical safety system is planned to take several decades. However, ERTMS is a complex system that evolves continuously increasing the risk of using outdated parts and components. Therefore, adaptability is required for an efficient process. Adaptability is the ability of a system to meet technological or functional changes without requiring structural modifications or replacements. This paper identifies factors that influence adaptability and researches critical issues for future adaptability of ERTMS. With these factors and issues, solutions are proposed that are validated in a use case and integrated in a strategy that strengthens adaptability of ERTMS for future operational needs. The main takeaways of this strategy is the need for technical modularity and a balanced stakeholder involvement in the implementation process.","ERTMS; adaptability; future-proof; innovation; SysML; GSM-R; FRMCS; ETCS; Railway infrastructure; Automatic Train Protection System","en","master thesis","","","","","","","","","","","","Transport, Infrastructure and Logistics","",""
"uuid:c6d43d2d-cb33-450f-b59a-636ec07bc34a","http://resolver.tudelft.nl/uuid:c6d43d2d-cb33-450f-b59a-636ec07bc34a","Automatic Depth Matching for Petrophysical Borehole Logs","Garcia Manso, A. (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Microelectronics)","Leus, G.J.T. (mentor); Przybysz-Jarnut, Justyna (mentor); Epping, Willem (graduation committee); Isufi, E. (graduation committee); Delft University of Technology (degree granting institution)","2020","In the oil and gas industry a crucial step for detecting and developing natural resources is to drill wells and measure miscellaneous properties along the well depth. These measurements are used to understand the rock and hydrocarbon properties and support oil/gas field development. The measurements are done at multiple times and using different tools. This introduces multiple disturbances
which are not related to the physical properties of the rocks or fluids themselves, and which should be tackled before the data is used to build subsurface models or take decisions. One important source of these disturbances is depth misalignment: in order to compare different measurements, care must be taken to ensure that all measurements (log curves) are properly positioned in depth. This process is called depth matching. In spite of multiple attempts to automate this process, it is still mostly done manually. This thesis addresses the automation problem and proposes a model-based approach to solve it using Parametric Time Warping (PTW).
Based on PTW, a parameterised warping function that warps one of the curves is assumed, and its parameters are determined by solving an optimization problem that maximizes the cross-correlation between the two curves. The warping function is assumed to have the parametric form of a piecewise linear function in order to accommodate the linear shifts that take place during the measurement process. This method, combined with preprocessing techniques such as an offset correction and low-pass filtering, gives a robust solution and can correctly align the most commonly occurring examples. Furthermore, the methodology is extended to depth match logs with severe distortion by applying the technique in an iterative fashion. Several examples are given in which the developed algorithm is tested on real log data, supplemented with an analysis of the method's computational complexity and its scalability to larger data sets.","Automatic depth matching; Piecewise Linear Time Warping; PLTW; Depth matching; Warping; Curve alignment; dynamic time warping; Borehole log; Petrophysics; Parametric; parametric warping; parametric time warping; time warping; depth warping","en","master thesis","","","","","","","","","","","","Electrical Engineering","",""
"uuid:aa2e6e66-5dac-4a8b-a247-63b94974b211","http://resolver.tudelft.nl/uuid:aa2e6e66-5dac-4a8b-a247-63b94974b211","Word recognition in a model of visually grounded speech: An analysis using techniques inspired by human speech processing research","Scholten, J.S.M. (TU Delft Electrical Engineering, Mathematics and Computer Science)","Scharenborg, O.E. (mentor); Merkx, Danny (mentor); Tintarev, N. (graduation committee); Oertel Genannt Bierbach, C.R.M.M. (graduation committee); Delft University of Technology (degree granting institution)","2020","A Visually Grounded Speech model is a neural model which is trained to embed image caption pairs closely together in a common embedding space. As a result, such a model can retrieve semantically related images given a speech caption and vice versa. The purpose of this research is to investigate whether and how a Visually Grounded Speech model can recognise individual words. Literature on Word Recognition in hu- mans, Automatic Speech Recognition and Visually Grounded Speech models was evaluated. Techniques used to analyse human speech processing, such as gating and priming, were taken as inspiration for the design of the experiments used in this thesis. Multiple aspects of words recognition were investigated through three experiments. Firstly, it was investigated whether the model can recognise individual words. Secondly, it was investigated whether the model can recognise words from a partial sequence of its phonemes. Thirdly, it was investigated how word recognition is affected by contextual information. The experiments show that the model is able to recognise words while not being supervised for that task, and that factors such as word frequency, the length of a word and the speaking rate affect word recognition. Furthermore, the experiments reveal that words can be recognised from a partial input of a word’s phoneme sequence as well, and that recognition is negatively influenced by word competition from the word initial cohort. Furthermore, the word recognition in context experiment reveals that contextual information can enhance the recognition of words which are recognised less well.","Visually Grounded Speech; Recurrent Neural Network; Flickr8k; Automatic Speech Recognition; Word Recognition","en","master thesis","","","","","","","","","","","","Computer Science","",""
"uuid:971cdcde-1a9a-490a-b72c-8490f2f668ed","http://resolver.tudelft.nl/uuid:971cdcde-1a9a-490a-b72c-8490f2f668ed","ERTMS/ETCS Hybrid Level 3 and ATO: A simulation based capacity impact study for the Dutch railway network","Vergroesen, Ruben (TU Delft Civil Engineering & Geosciences; TU Delft Transport and Planning)","Goverde, R.M.P. (mentor); Quaglietta, E. (graduation committee); Baggen, J.H. (graduation committee); Bartholomeus, Maarten (graduation committee); Pot, Alwin (graduation committee); Delft University of Technology (degree granting institution)","2020","In the 2020 timetable the Dutch legacy signalling and automatic train protection systems (ATB/NS’54) are operating at maximum capacity in the busiest parts of the network. With future expansions of services in mind, the capacity provided by the legacy systems could be considered insufficient. The planned implementation of the ERTMS/ETCS Level 2 system might still not provide sufficient capacity in these locations. This thesis investigates how the technological developments of the Hybrid Level 3 and ATO (GoA2) systems could contribute to the capacity of the railway network with the objective to provide a solution for the busiest part of the Dutch railway network. The capacity effects of these systems are analysed through a simulation case study on the SAAL corridor on the Dutch railway network, using a variant of the 2030 timetable. Through timetable compressions, the capacity that the different systems (ATB/NS’54, ERTMS/ETCS Level 2, Hybrid Level 3 and ATO) provide in different configurations. A further comparison between the human driver behaviour and automatic systems is made to determine where possible capacity benefits could be gained from. The thesis was partly used as an opportunity to further develop capacity modelling methods for ERTMS/ETCS and ATO based on the constraints provided by the systems and modelling software. The timetable compressions provided an overview of each separate system and configuration step and their contribution to the capacity. The results of the timetable compressions indicated that the ERTMS/ETCS Hybrid Level 3 and ATO systems provided 4 main steps for increasing capacity. 1. Improving braking behaviour through the use of ERTMS/ETCS braking curves instead of the legacy system 2. The use of shorter block sections through the ERTMS/ETCS Hybrid Level 3 system 3. An optimisation of driving/braking behaviour through the ATO system 4. The possibility to limit headway buffer times due to homogenisation of train movements brought by the ATO Sizable variations in the timetable compressions on the different part of the corridor caused by variations in the service pattern and infrastructure configuration indicated that the effectiveness of these systems will vary based on the location. This shows the importance for infrastructure managers of investigating these systems on a case by case basis when using them to increase capacity on their respective networks. The results from the case study indicate that the ERTMS/ETCS Hybrid Level 3 and ATO systems could provide a possible alternative for costly infrastructure expansion projects to increase capacity on the Dutch railway network. Trade-offs will need to be made between capacity, costs, and robustness to determine the optimal system configuration.","ERTMS/ETCS; ATO; Capacity; Automatic Train Operation; Hybrid Level 3","en","master thesis","","","","","","","","","","","","Civil Engineering | Transport and Planning","",""
"uuid:dffc21d9-936c-4286-9611-a2a846cfc416","http://resolver.tudelft.nl/uuid:dffc21d9-936c-4286-9611-a2a846cfc416","Automatic Train Operation over Legacy Automatic Train Protection Systems: A Case Study on the Groningen-Buitenpost Line","Buurmans, Kristijn (TU Delft Civil Engineering and Geosciences; TU Delft Transport and Planning)","Goverde, Rob (mentor); Quaglietta, Egidio (mentor); Papadimitriou, Eleonora (mentor); Schaafsma, Alfons (mentor); Delft University of Technology (degree granting institution)","2019","Railways are facing the challenge to simultaneously increase the capacity and the operational performance of their network. Automatic train operation (ATO) can be one of the technologies to increase the capacity of the railway network. The specifications for ATO over the European Train Control System (ETCS) automatic train protection system have been defined. However, testing is taking place over legacy automatic train protection systems , such as ATBNG, as well. ATO requires information from the automatic train protection system. The goal of this master thesis is to determine the data gap between ETCS and ATBNG in relation to ATO. First a generic ATO model is developed from literature. This model presents the information required and produced by ATO. The model is used as a framework for the analysis of ATO over ETCS and ATO over ATBNG. This analysis resulted in a conceptual model of both ATO over ETCS and ATO over ATBNG. The conceptual model shows that ATBNG can provide the required information if the ATBNG operational envelope is used as the relevant safety envelope. However, currently the operational envelope of the NS’54 signalling system is the relevant safety envelope. Additional information is required to allow ATO to determine the NS’54 operational envelope. Furthermore, ATBNG is not capable of presenting ATO information to the train driver. A new ATO DMI is required. Moreover, the ProRail traffic management system is not yet capable of providing ATO with the required information for both ATO over ETCS and ATO over ATBNG. To validate the conceptual ATO over ATBNG model three case studies have been worked out. The study performed for this master thesis is an exploratory study, therefore validation of the findings and assumptions is required.","Automatic Train Operation; ETCS; ATBNG; ATO; Automatic Train Protection System; NS'54","en","master thesis","","","","","","","","","","","","Civil Engineering | Transport and Planning","",""
"uuid:30c436c1-7727-4068-9315-383c040ba6a0","http://resolver.tudelft.nl/uuid:30c436c1-7727-4068-9315-383c040ba6a0","Collision risk assessment in coastal waters","Grossmann, Martin (TU Delft Mechanical, Maritime and Materials Engineering)","Kana, Austin (graduation committee); Hopman, Hans (mentor); Hassel, Martin (mentor); Delft University of Technology (degree granting institution)","2019","The amount of international shipped cargo grows steadily, and seas are exploited for seabed mining and energy production more than ever. As a result, there is an increase in traffic density and decrease in free navigational space, potentially causing a higher incidence of dangerous navigation situations that may lead to ship collisions. This thesis establishes a hypothesis that there are coastal areas where the risk of collision is unexplored and abnormally high and has not been analysed yet. Therefore, the thesis aims to develop a method suitable to assess the risk of collision in coastal waters. A thesis literature review primarily focuses on the current collision risk assessment methods, circumstances of collisions, and sources of navigational information for coastal waters. The literature review concludes that a promising and novel approach is to detect near-collision situations based on the data from the Automatic Identification System (AIS). The near-collision exists when a ship's safety domain is violated and, simultaneously, the ship performed a last-moment evasive manoeuvre, which is identified by an abnormal ship's rate of turn. Building on a previous basic version of the near-collision detection method and AIS data provided by Safetec Nordic AS, this thesis develops a collision risk assessing tool that significantly outperforms the original method. The performance of the designed method was evaluated using AIS data from the Vestfjorden area in Norwegian coast during 2013-2015. The case study shows that this approach effectively detects near-collision situations but identifies a considerable number of new false near-collisions. The details and spatial distribution of detected near-collisions provide valuable insight into navigational areas vulnerable to collisions, collision circumstances, and frequency of collisions.","risk assessment; Collision detection; Automatic Identification System; coastal areas; near-collision","en","master thesis","","","","","","","","","","","","","",""
"uuid:f07bb176-887d-442b-870d-313b7b049774","http://resolver.tudelft.nl/uuid:f07bb176-887d-442b-870d-313b7b049774","Automatic Generation of Legally and Ethically Correct Email Replies","Meijer, Steven (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology); Haveman, Yannick (TU Delft Electrical Engineering, Mathematics and Computer Science)","Rellermeyer, Jan S. (graduation committee); Delft University of Technology (degree granting institution)","2019","In this paper, we will explain how our research into a classified field resulted in the creation of an entity recognizer that can recognize 10 different characteristics, and an intent classifier which is able to classify 21 different intents and automatically generate a response to incoming emails. This is all done within legal and ethical boundaries.","Named Entity Recognition; Intent Recognition; Automatic Response Generation","en","bachelor thesis","","","","","","","","2021-07-05","","","","","",""
"uuid:ace70aa8-6a26-42d2-a655-bd83d948a3c2","http://resolver.tudelft.nl/uuid:ace70aa8-6a26-42d2-a655-bd83d948a3c2","Robotic assembly of interlocking CNC-cut sheets into a wall component: Redesign the CNC-cut elements of the wall component in order to reduce the robotic automatic assembly time","de Bruijn, Jeroen (TU Delft Architecture and the Built Environment)","Stoutjesdijk, P.M.M. (mentor); Așut, Serdar (graduation committee); Delft University of Technology (degree granting institution)","2019","There is a large pressure on the construction industry in the Netherlands because there are not enough skilled employees and 1 million new homes need to be built before 2030. TheNewMakers (TNM) want to radically change how buildings are being constructed in order to speed up the building process. TNM is applying their LEGO inspired building system that consists out of a database with building blocks. One of these building blocks is a wall component, which is applied in the façade of a tiny house. The aim of this research is to apply a robot to assemble CNC cut elements into this wall component. To achieve this goal, the wall component will be redesigned with the main focus on reducing the automatic assembly time.
Literature research was conducted and a robotic assembly process hypothesis was made. This hypothesis provided input for a robotic assembly experiment and a robotic simulation. For the experiment only a small robot was available and, therefore, only a section of the wall component was considered. The experiment was conducted in collaboration with HBO mechatronics students and helped to physically test the suggested improvements of the wall component. The robotic simulation helped to measure how long it would take a large robot to assemble the whole wall component. This allowed a time comparison between the simulated robotic and manual assembly processes of the whole wall component.
The literature research helped to choose a robot for the robotic assembly process hypothesis, which was the industrial robot Smart5 NJ 110-3.0 from Comau. Redesigning the elements of the wall component allowed for an easier assembly sequence. Besides that, extra geometry tolerances ensured certain steps of the robotic assembly process could be performed faster and were less prone to failure.
The robotic simulation showed that the robot is at least 1.17 times faster than a human, which would mean the annual production of the robot is about 5.5 times higher. The CNC milling process takes about 8 times longer than the robotic assembly process.","robot; robotics; robotic simulation; RoboDK; cnc; cnc milling; assemblage; assembly; DFA2; Design For Automatic Assembly","en","master thesis","","","","","","","","","","","","","",""
"uuid:2ca126a0-b302-4190-b03d-a9bcf8bed8d4","http://resolver.tudelft.nl/uuid:2ca126a0-b302-4190-b03d-a9bcf8bed8d4","Development and application of a Multidisciplinary Design Optimisation sizing platform for the conceptual design of hypersonic long-range transport aircraft","Clar, Thibault (TU Delft Aerospace Engineering; TU Delft Flight Performance and Propulsion)","Oliviero, F. (mentor); Verstraete, Dries (graduation committee); Dirkx, D. (graduation committee); Veldhuis, L.L.M. (graduation committee); Delft University of Technology (degree granting institution)","2019","With the global increase in passenger traffic and growing popularity of long-haul routes over the Asia Pacific region and Atlantic Ocean, the possibility for hypersonic transport could become an attractive option to reduce flight time over long distance from 16-20 hours down to around 4-5 hours. In this thesis, a Multi-Disciplinary Optimisation platform has been developed to allow for the optimal sizing of hypersonic transport vehicles using vehicle take-off mass as the performance indicator subjected to fuel volume and payload height constrains. The current platform is applied to the LAPCAT A2 hypersonic long-range transport configuration by Reaction Engines, to determine the impact of range and cruise Mach number on the design of hypersonic aircraft. Results show that the optimal shape is greatly dependent on the aircraft range and fuel volume constraint. Additionally, the optimum hypersonic cruise Mach number is dictated by a trade-off between mission time, engine efficiency and Thermal Protection System mass.
The objective of this thesis is to automatically measure, from a digital 3D model of an upper limb stump, the dimensions required to create an upper limb prosthetic socket. The main method used in this thesis is Statistical Shape Modelling (SSM); we used the Singular SSM and Multiple SSM approaches. Geodesic distance and Intersection Line are used as measurement methods.
In order to validate the capability of the measurement algorithm to work with real human models, an experiment was conducted to test the precision of the algorithm. Nineteen participants with normal hands were 3D scanned. The manual measurement values were then compared with the values from the 3D scans by using both SSM approaches.
We propose an algorithm for the automatic measurement of a digital model of the human upper limb for prosthetic applications. The algorithm proved that we can measure a real human upper limb for prosthetic applications without human intervention. The Multiple SSM approach showed results sufficient for use in prosthetic applications for upper limb sockets. In the future, the resulting 3D-printed socket can be tested on upper limb amputees.
Objective - The objective of this research is to determine if and how Big Data analysis can be used to model (future) demand for Crew Transfer Vessels (CTV) being used for Crew Transfer Operations (CTO) in the offshore wind industry.
Methodology - Almost 45 million AIS location reports of 39 CTVs servicing 263 turbines in 3 offshore wind farms throughout 2016 are analysed to derive key figures of the executed CTOs, such as the weather window and the number of executed CTOs per hour. The CTV demand is modelled based on these key figures and three wind farm-specific input parameters: the number of turbines, the distance between the wind farm and the port, and the sea state distribution.
Results - The CTO demand of the 263 analysed wind turbines was on average 113 per year in 2016. This average CTO demand varies by almost 50% between turbine types. With an accuracy of 4%, it is modelled that a yearly average of 12.4 CTVs is needed to service these 263 turbines. Furthermore, the CTV demand decreases on average by 11% in the three analysed wind farms if the CTO limit can be increased from 1.5 m to 2.0 m mean significant wave height. This results in a potential cost saving of around € 5.1 million on a yearly basis for these three wind farms alone.
Implications - AIS data can be used to model vessel demand and gain insight into the market size. The accuracy of the developed model can be improved by adding more wind farm-specific variables and/or data of more CTVs and wind farms. The gained knowledge about using Big Data analysis to forecast the CTV market size is useful and important for the introduction and future development of commercial AIS-based data analysis. Furthermore, it provides insights into the operational profile of CTVs. This can be used to develop better vessels, better service the market and ultimately help to lower the cost price of offshore wind energy. It is believed that the maritime sector could profit from AIS data analysis.
into the development of an automatic PV system design algorithm.
The research consists of two main parts: a section related to panel placement and a section related to the inverter choice. Strategies were developed for both parts, which resulted in several prototype algorithms. These algorithms were tested to see whether they have practical potential. The panel placement algorithms are divided into two categories: maximum panel placement and finite panel placement. Both categories were developed for both flat and pitched roof sections. The shading caused by surrounding obstacles was taken into account to find the optimal positions to place the panels. A grading system for the panel layout was developed to ensure that preferred layouts are found. The shading caused by the surrounding obstacles was also used to determine the type of inverter that is optimal for certain panel configurations.
A proof of concept was conducted for the developed algorithms by testing them on real roofs and comparing the results with PV system designs that were made manually. The designs were compared with each other in terms of predicted PV system performance, layout and the duration of the design process.
It was demonstrated that the finite panel placement algorithm produced PV systems with an average performance difference of about -0.83% (standard deviation 5.26%) compared to the manual designs. The maximum panel placement algorithm took an average of 2.7 minutes with a standard deviation of 2.1 minutes; the finite panel placement algorithm took on average 3.8 minutes with a standard deviation of 5.5 minutes. Both can be considered significantly faster than making a manual design.
For a PV system with many panels and several string inverters, algorithms were developed that predict the optimal configuration of the strings and the total performance losses that occur when panels are connected in series. These algorithms can help to determine the optimal type of inverter and how to optimally configure separate panel strings in large systems. Initial tests indicate that the algorithms work, but not enough testing has been done to be conclusive.
The work done in this thesis can be used as a stepping stone for further development of automatic PV system design algorithms. The panel placement algorithms and the inverter algorithms can be developed further into a complete automatic PV system design algorithm.","PV; Automatic; LiDAR","en","master thesis","","","","","","","","","","","","Electrical Engineering | Sustainable Energy Technology","PVision",""
"uuid:9ddc8a10-29bf-491b-9a46-5e858bd9ccbe","http://resolver.tudelft.nl/uuid:9ddc8a10-29bf-491b-9a46-5e858bd9ccbe","Leveraging lighting systems with novel color sensor-based applications","Zhang, Ruiling (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Software Technology)","Zuñiga Zamalloa, Marco (mentor); Delft University of Technology (degree granting institution)","2017","Lighting systems are attracting many researchers and companies to investigate the potential of light beyond illumination, by creating new smart illumination systems or developing indoor positioning methods. The main challenge in realizing novel systems is to process light information in such a way that new insights are discovered. There are typically two ways to measure and process light: through photodiodes, which are cheap, but offer little information; or through cameras, which offer much information, but are expensive and create privacy issues. There is however a third type of sensor that has not been investigated much in lighting systems: color sensors. Color sensors can be viewed as a middle-of-the-road approach between photodiodes and cameras. Color sensors are inexpensive, yet provide more information than simple photodiodes.
This thesis proposes two novel color sensor-based methods to enable (i) a dynamic tunable lighting system and (ii) a light-based indoor tracking system. The former allows retailers to present their merchandise in an appealing way to their customers (by adapting the light in their shops based on the products' colors). The latter makes it possible to track objects by exploiting solely their exterior color (without modulating the light source or requiring objects to carry optical receivers). Our experiments indicate that the methods we propose are able to handle the complex lighting conditions one would encounter in realizing a dynamic tunable lighting system. Furthermore, our results prove that indoor tracking of objects is possible, given that objects are sufficiently distinct in their color. The accuracy of correctly identifying, and thus tracking, an object is found to be 91.4%.","Color Detection; Color Sensor; Color Temperature; Automatic tunable white lighting; Passive Sensing; Indoor Tracking","en","master thesis","","","","","","","","2020-09-30","","","","Electrical Engineering","",""
"uuid:1b6d2572-4aa3-45bd-aaad-bf8a8ffcf1d2","http://resolver.tudelft.nl/uuid:1b6d2572-4aa3-45bd-aaad-bf8a8ffcf1d2","Towards Automatic Reverb Addition for Production Oriented Multi-Track Audio Mixing","Pujahari, Abhinav (TU Delft Electrical Engineering, Mathematics and Computer Science)","Liem, Cynthia (mentor); Hanjalic, Alan (graduation committee); Broekens, Joost (graduation committee); Delft University of Technology (degree granting institution)","2017","Sound spatialization is a natural, intuitive but sparsely researched topic in multi-track audio mixing. Although a lot of research has been devoted to the automatic fader gain settings, addition of dynamic range equalization and related effects, delay and Reverb have taken a backseat. The dichotomy in the artistic and engineering approaches to audio mixing have resulted in studio best practices not given their due with suitable algorithmic interpretations.
Due regard for studio practices, along with a more holistic approach combining all the steps of audio mixing, is especially necessary against the background of the exponential growth of bedroom studio producers and musicians mixing and crafting their tracks personally. The additional growth in the availability of faster personal computing only fuels this trend.
This thesis is an exploratory foray into the addition of Reverb to production-oriented multi-track mixing. Taking studio practices into account, 2 different algorithms are compared with a professionally mixed track and an unreverberated reference track. The results from hidden reference listening tests are analyzed to draw conclusions about the effectiveness of automatic methods of Reverb addition against the professionally mixed track.
The results suggest that the current algorithms implemented are unable to reach the subjective perceptual quality of the professionally mixed track. However, some important conclusions are drawn from the theoretical and experimental research which provide clear guidelines for possible future implementations.","Automatic mixing; Reverberation; Music Mixing; Music Production; Automatic Reverb Addition","en","master thesis","","","","","","","","","","","","Electrical Engineering","",""
"uuid:53cc492f-1fd0-4b0e-a879-286f54258904","http://resolver.tudelft.nl/uuid:53cc492f-1fd0-4b0e-a879-286f54258904","A study of the 1984 report An Automatic Proof Procedure For Several Geometries by Th. Bruyn and H.L. Claasen","Bruyn, Tim (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Delft Institute of Applied Mathematics)","van Neerven, Jan (mentor); van den Dries, Bart (graduation committee); Hart, Klaas Pieter (graduation committee); Tonino, Hans (graduation committee); Delft University of Technology (degree granting institution)","2017","This report is a discussion of the 1984 report 'An automatic proof procedure for several geometries' by Th. Bruyn and H.L. Claasen, inspired by a personal desire to understand the work of Th. Bruyn. See: http://resolver.tudelft.nl/uuid:b768c6ce-f625-4236-9b0b-32a47fab143e
Bruyn and Claasen prove that certain true propositions of the theory of intersections within the two-dimensional projective geometry over the real numbers can be formulated by use of figures. It is proven that figures obtained by manipulating these figures will also correspond to propositions. The method to do so proves that the obtained propositions are a direct consequence of the original propositions and are therefore proven to be true. One of their main results is to use the theorem of Pappus to generate the theorem of Desargues, thereby proving that Desargues follows from Pappus (something that is well known in projective geometry).
This report aims to give a comprehensive explanation of their method as well as a detailed demonstration of their procedure. It is a summary of their work with added explanations and examples.","Automatic; Proof; Procedure; Projective; Geometry; History","en","bachelor thesis","","","","","","","","","","","","","",""
"uuid:75480279-c222-4f13-addc-4afd52e38c24","http://resolver.tudelft.nl/uuid:75480279-c222-4f13-addc-4afd52e38c24","Creating a Mood Database for automated affect analysis","Albeda, J.","Redi, J.A. (mentor)","2016","Affect-adaptive systems are dependent on their ability to automatically recognize a user’s affective state. This study aims to contribute to the creation of an affect-adaptive system that can recognize negative moods of elderly in care homes from a video feed, and improve it by adapting the lighting in the room. An affective database of videos portraying different moods is required to train such a system. While many affective databases exist already, they are primarily targeting emotions rather than mood. Therefore, we introduce a new database of annotated videos that can be used for mood recognition. To maintain control over which moods are depicted in the videos in the database, we combine the use of mood induction and acted performance to portray the moods in a realistic way, incorporating in the acted scripts the results from a series of interviews with caretakers in care homes. The database covers three visual modalities: body, face and 3D Kinect data for a total of 24 hours of recorded video material. We use crowdsourcing to annotate such a large amount of material in terms of perceived mood of the person portrayed in the videos, by outsourcing via the internet the annotation task to a large number of paid annotators. A risk of using crowdsourcing is unreliable annotator performance, due to the low level of control applicable to the annotation process. We deal with this problem by filtering the annotations according to predefined criteria, checking for task commitment and self-consistency of the annotators. We validate our use of the combination of induction and actors with a comparison between the intended mood, the mood felt by the actors, and the mood perceived by annotators. Furthermore, we demonstrate that crowdsourcing is a promising tool for the annotation of mood.","automatic affect recognition; crowdsourcing; multimodal database; emotion recognition; mood recognition; affective annotation","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Intelligent Systems","","Interactive Intelligence","",""
"uuid:25aa7faa-8111-4198-a247-6eb84a5e49ef","http://resolver.tudelft.nl/uuid:25aa7faa-8111-4198-a247-6eb84a5e49ef","Model-based leak localization in small water supply networks","Moors, J.","Van der Hoek, J.P. (mentor); Scholten, L. (mentor); Den Besten, J. (mentor); Van de Giesen, N.C. (mentor)","2016","Small leaks in water supply networks often remain undiscovered, resulting in large amounts of lost water. Moreover, small leaks can grow larger over time and may result in pipe bursts, having negative consequences for the surroundings. An automatic leak localization method is required to decrease the search area and hence localize small leaks earlier. In this research, the automatic leak localization method of Quevedo et al. (2011) is validated in DMA Leimuiden (the Netherlands). A prerequisite of the localization method is a detailed consumption distribution of the inflow for the hydraulic model. The goal of this research is to study the need for a detailed consumption distribution model in a DMA with a small MNF compared to the leak size (MNF: 4.5 m3/h, leak size: 5.2 m3/h, 7.5 m3/h and 15 m3/h). The leak localization method was applied to eight artificial leaks that lasted 15 minutes and measurements of one day of a real leak (5.2 m3/h). Leak localization results of the artificial leaks showed that there was no influence of the consumption distribution during the night. The leak localization method performs the same with both consumption models in case of low flow conditions and when leak localization results of the real leak for a whole day are combined. The performance of the leak localization method depends on the location of the leak. For some leak locations more flow in the system is required to create detectable head loss at the sensor locations. Uncertainties in the model cause larger pressure variations with higher flow conditions and a more detailed consumption distribution model must be used when there is more flow (morning peak). Too short measurement periods make the leak localization result sensitive to unexpected consumption inside the DMA. An accumulation of hourly results of a whole day makes the method more robust and gave satisfying performance irrespective of the used consumption models and with only 6 pressure sensors inside the network.","automatic leak localization; DMA; leak detection","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Water Management","","Water Resources","","52.222387, 4.672115"
"uuid:35996aca-8365-4927-ada3-f9859e55d5fa","http://resolver.tudelft.nl/uuid:35996aca-8365-4927-ada3-f9859e55d5fa","Controlling and reducing case picking in the supply chain: A case study of Unilever & Kuehne + Nagel","Wammes, M.","Verbraeck, A. (mentor); Van Duin, J.H.R. (mentor); Duinkerken, M.B. (mentor); Loonstra, H. (mentor)","2015","","Case picking; Unilever; Kuehne + Nagel; Supply chain; Manual pick; Automatic Layer Picker; BBD; Best Before Date; Incomplete inbound; Customer behaviour","en","master thesis","","","","","","","","","Technology, Policy and Management","Policy Analysis","","Transport Infrastructure and Logistics","",""
"uuid:33257d89-3016-4b85-8fa3-517fb3abd314","http://resolver.tudelft.nl/uuid:33257d89-3016-4b85-8fa3-517fb3abd314","Supervised Learning for Measuring Hip Joint Distance in Digital X-ray Images","Krishnakumari, P.K.","Vilanova, A. (mentor); Flipse, I. (mentor)","2015","Osteoarthritis is a degenerative joint disease which is hard to diagnose objectively and may vary based on the surgeon. This disease is usually diagnosed by measuring several characteristic features of Hip X-rays mainly the joint distance between the femoral head and acetabular cup. Hip joint distance reduction is a clear symptom of Osteoarthritis as it suggest cartilage disappearance. Hip joint distance metric involves segmentation of the femur and pelvis in X-rays, which is a challenging task because of contrast variations as well as external factors like anatomical and pose-variation. A multiscale approach based on Machine Learning is presented in this work for the segmentation of multiple bone structures. This technique uses landmark detection via data-driven joint estimation of image displacements and introduces a unique refinement step for improving the accuracy of detection. The detection is based on supervised learning using manually annotated landmarks. Therefore, the landmark placement along the edge of the bone has been covered in detail. The detected landmarks are then used to determine the joint distance in several locations along the hip joint. Aside from the segmentation technique, this work also introduces novel joint distance metrics which can be used to detect joint space narrowing. A detailed quantitative evaluation proved this work to be superior to the current state-of-the-art segmentation that handles multiple bone structures and is the first in evaluating the joint space width metric. We have also considered and discussed in brief the impact of such a system for diagnostic purposes.","joint space; landmark detection; Active Shape Models; 2D gradient profiling; X-ray image; automatic segmentation; supervised learning; osteoarthritis; machine learning","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Graphics and Visualization","","Masters ICT Innovation","",""
"uuid:2cce8b0b-4689-4617-bb02-729c98a01b82","http://resolver.tudelft.nl/uuid:2cce8b0b-4689-4617-bb02-729c98a01b82","On-line parameter updating as an optimisation tool for Decision Support Systems","Schellingerhout, J.","Van De Giesen, N.C. (mentor); Van Overloop, P.J.A.T.M. (mentor); Sadowska, A.D. (mentor); Mondeel, H.J. (mentor); Hummel, S. (mentor)","2014","Decision Support Systems (DSSs) that are used nowadays by water managers often predict states that do not correspond with the observed states. This is caused by changing parameters in the real systems, while the parameters used in the current DSSs are kept at a fixed level or follow a temporal pattern that does not always represents reality. Usually, these parameters are calibrated in an off-line setting, but when utilising in an on-line system there is a significant drift in performance. Therefore, there is a high need to some form of on-line parameter estimation that reduces the differences between the modelled and observed states. The objectives of this study, in order to reduce the differences between the modelled and observed states, read: (1) defining the state-of-the-art knowledge on optimisation of DSSs regarding on-line parameter updating and optimisation techniques for modelling of large-scale river networks; (2) determining whether automatic parameter updating is possible with reasonable results in a twin experiment set-up for different normative scenarios, with respect to parameter identifiability, model bias and model performance; (3) determining whether automatic parameter updating is possible with real measurement data, with respect to the same performance indicators; and (4) determining how much the performance does improve when implementing some form of parameter updating. The first objective is addressed by former studies (e.g.[2],[4],[7],[9]), which have confirmed On-line Parameter Estimation (OPE) can be applied successfully as a tool to decrease model discrepancies. Both the Doesn’t Use Derivatives (DuD) algorithm, [1], and the Shuffled Complex Evolution (SCE) algorithm, [2], have proved to be robust and effective methods for parameter estimation in multiple fields of expertise, e.g. [3],[4],[5],[6],[7]. The DuD algorithm is utilised in this study, since initial model results have illustrated that the high robustness level of the DuD algorithm. The second objective is addressed by constructively up-scaling the amount of calibration parameters by using several scenarios. The optimisation results are analysed extensively regarding the model performance in terms of robustness, effectiveness, efficiency and model bias. Prior to the OPE, an initial model analysis is performed to determine the model sensitivity to parameter perturbations and identifiability and uniqueness of the optimisation parameters. The analyses of the scenarios’ results demonstrate a high level of model performance, in terms of the performance indicators, in a twin experiment set-up. However, coincidentally the bias follows the temporal pattern in model states, which is probably a numerical error induced by the OPE tool. Nonetheless, the level of bias is sufficiently low to neglect this effect. Third objective is addressed by following the same procedure as for the second objective. However now, the observational data is assigned with white noise in order to facilitate upscaling of the twin experiment set-up to field conditions. 
The analyses of the results illustrate that up-scaling to field conditions is very well possible, since the results show high levels of robustness, effectiveness and efficiency while suppressing the model bias. The fourth objective is addressed by implementing OPE in an existing DSS. The assignment of practical real scenarios, like river maintenance programmes, illustrates the necessity of the OPE tool to accurately estimate the correct parameter values, thereby improving the model performance of the original DSS. The transition zone between two parameter values in time, however, is not predicted, as sharp transitions cannot be predicted well as a result of the calibration window used with the assumptions of this study. Moreover, local transitions in parameter values are difficult to predict with the OPE tool. In conclusion, this study demonstrates that it is essential to use some form of OPE to accurately predict the actual parameter values for highly varied scenarios. This statement is grounded in the high levels of the performance indicators observed in the results of the OPE tool. The computation time is sufficiently low that the tool is applicable in real-time systems. However, more research on the discretisation of the transition phase, on the inclusion of control actions and on other types of additive noise is required before implementing the tool in a real system. Furthermore, the added value to the model performance of using more observation locations and more parameters should be investigated.","Decision Support Systems; automatic calibration; on-line parameter estimation; updating; operational water management; optimisation; DuD; river networks","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Water Management","","Water Resources","","51.62657, 5.222788"
"uuid:43eb09b3-0874-40b7-b739-04c397538c89","http://resolver.tudelft.nl/uuid:43eb09b3-0874-40b7-b739-04c397538c89","Designing Parametric Modeling for Bioprinting Ear Cartilage","Feenstra, C.J.","Goossens, R.H.M. (mentor); Verlinden, J.C. (mentor); Song, Y. (mentor); Bos, E.J. (mentor); Wolff, J. (mentor)","2014","At the VU University Medical Center, Amsterdam, a new process for total ear reconstruction is being developed. The current ear reconstruction process involves the sculpting of a scaffold which will be wrapped in skin to form the shape of the new ear. This requires the harvesting of large amounts of rib cartilage needed for construction of the ear scaffold, and only limited personalization of the new ear scaffold is realized. The new process has the patient's remaining ear 3D-scanned after which a standard parametric implantable model of a scaffold has to be adjusted to fit the 3D scan. This model is then 3D printed and used in the reconstruction of the ear. This project focuses on the fitting of the parametric model to the scan data. Based on a model made by Tom Scholten, a parametric model is further developed using the Rhinoceros3D plug-in Grasshopper. A MATLAB program is developed, along with a graphical user interface. An automatic feature has also been developed and tested. While the software and model combination is but a proof of concept and only a step in the development of the whole new ear reconstruction process, the results obtained in this project are notable: the shape of the parametric model and its adjustment parameters were evaluated by VUMC experts and found to be good for this stage of the project. The MATLAB program also performed well, but is still subject to further research. The user interface obtained positive feedback from a user test as well as from an accompanying questionnaire. Finally, a number of recommendations for further research and development are given.","ear; parametric; matlab; automatic; scaling; reconstruction; interface","en","master thesis","","","","","","","Campus only","2015-08-25","Industrial Design Engineering","Design Engineering","","Master of Science Integrated Product Design","",""
"uuid:eec252e0-1a6b-4a3a-a167-45aaadc2bb7f","http://resolver.tudelft.nl/uuid:eec252e0-1a6b-4a3a-a167-45aaadc2bb7f","Automatically deriving and updating attribute road data from movement trajectories","Van Winden, K.B.A.","Biljecki, F. (mentor); Van der Spek, S.C. (mentor)","2014","There are many applications that use maps, and more detailed the maps are, more applications can benefit from the map. Many maps are still manually created and also the underlying attributes are informed in a manual process. This thesis presents a method to automatically derive and update attribute road data by mining and analyzing movement trajectories. The method used for this thesis implemented OpenStreetMap as the underlying map which will be updated. GPS tracks are the movement trajectories that will be used to derive the information for the road attributes. There are attributes that are already conceptually present in OpenStreetMap but are in practice rarely filled. The attributes that will be investigated to derive are: whether the road is a one or two way road, the speed limit of the road, the number of lanes of the road and which vehicles have access to the road. Also, new attributes are introduced in this thesis. These are the average speed of the road, the hours in which the road is congested, the importance of the road and whether the road has a certain geometrical error. Preprocessing is performed before the attributes can be derived. Important are the classification of the transportation mode of the GPS tracks and the map matching of the GPS points to the roads they are on. When the IDs of the roads, where the GPS points are on, are known the attribute extraction algorithms can be applied. These algorithms all have different methods for deriving their attributes. There are attributes that use the speed of the GPS point, the distance from the point to the road or the heading of the point. For some attributes, a hierarchical code list is created to provide different perspectives on the error of the attributes. The code list consists of the values of the attributes and the hierarchy between these lists describes the level of detail and the granularity of the values. While some attributes were classified correctly in almost 100\% of the cases, the extraction of the attributes were not all successful. The number of lanes proved to be too difficult to derive out of the available data and the importance of a road relies on a complete coverage of data which was not the case. Although, the latter is applied in this research. The other attributes had different results, the accuracy of the classification of the speed limit was 69,2\%. However, when taking into account speed limits that are only one step away (e.g. 60 km/h instead of the classified 50 km/h) the classification increases to 95\%. The classification of the roads that allow bicycles was 74\% and the attribute to determine whether a road is a one or two way road has a classification accuracy of 99\%. In the future, the attribute extraction algorithms could be improved or expanded. More detailed levels in the hierarchical code list could be added and constraints could be added to improve the attributes. Also, some techniques might be enhanced for better results. Finally, the ideal application of this method would be deriving and updating the attributes in real-time. 
This could lead to live maps which change in real time with the changes on the road.","automatically; deriving; updating; movement; trajectories; openstreetmap; gps; attributes; road","en","master thesis","","","","","","","","2014-06-26","Architecture and The Built Environment","Geomatics","","GIS Technology","",""
"uuid:31380219-f8e8-4c66-a2dc-548c3680bb8d","http://resolver.tudelft.nl/uuid:31380219-f8e8-4c66-a2dc-548c3680bb8d","Automatic generation of CityGML LoD3 building models from IFC models","Donkers, S.","Ledoux, H. (mentor); Zhao, J. (mentor); Stoter, J.E. (mentor)","2013","CityGML is a standardized data format used to store the semantic information and geometries of buildings and other object classes of 3D city models. The Level of Detail of current state of the art city models (LoD2) is not sufficient for accurate environmental simulations like noise, the solar potential of windows and other types of analyses. An LoD3 building model represents the full architectural exterior of a building with balconies, windows and so forth. The generation of these models needs to be automated as it is otherwise infeasible due to the required high amount of manual labour. In the architectural world, detailed building models are created in IFC format. This thesis shows that it is possible to automatically generate valid and semantically rich CityGML LoD3 building models directly from IFC models. Also an initial investigation is done on the possibilities for the conversion of IFC models to CityGML LoD4. For the conversion the semantic and geometric validity requirements are determined for CityGML. A methodology for the conversion is developed and a prototype implementation is made to prove the effectiveness of the conversion. The conversion consists of three parts: 1) The extraction and mapping of IFC semantics to CityGML semantics; 2) A geometric generalization which extracts the exterior shell using a transformation based on Boolean and morphological operations; 3) Semantic and geometric refinements which optimize the model for analyses. The developed prototype is able to successfully convert IFC models to CityGML LoD3. All the resulting models were geometrically validated according to the ISO19107 standard, and semantics were checked manually. Few improper semantics occur in the output due to missing semantics in IFC. For example, there are no semantics for balconies or dormers in IFC. Recommendations are given to improve the alignment between the two formats. For IFC additional semantics are recommended whereas it is important for CityGML to specify how certain aspects are to be modelled. The research presented in this thesis can be used as the foundation for future work on the interoperability between Architecture and Geomatics. The software package is open source and freely available at https://github.com/tudelft-gist/ifc2citygml.","automatic; conversion; generation; building; models; IFC; CityGML; LoD3; LoD4; valid; ISO19107; 3D; GIS; semantic; mapping; geometry; transformation; morphological; operations; dilation; erosion; closing; Minkowski sum; Boolean; non-manifold; 2-manifold; solid; shell; degenerate; floating-point; arithmetic","en","master thesis","","","","","","","","2013-12-20","OTB Research Institute for the Built Environment","GIS technology","","Geomatics","",""
"uuid:0e6239a1-5050-42c1-b104-3134ba5273cd","http://resolver.tudelft.nl/uuid:0e6239a1-5050-42c1-b104-3134ba5273cd","An approach to the automatic synthesis of controllers with mixed qualitative/quantitative specifications.","Tasoglou, A.","Mazo Jr., M. (mentor)","2013","The world of systems and control guides more of our lives than most of us realize. Most of the products we rely on today are actually systems comprised of mechanical, electrical or electronic components. Engineering these complex systems is a challenge, as their ever growing complexity has made the analysis and the design of such systems an ambitious task. This urged the need to explore new methods to mitigate the complexity and to create simplified models. The answer to these new challenges? \textit{Abstractions}. An abstraction of the the continuous dynamics is a \textit{symbolic model}, where each ``symbol'' corresponds to an ``aggregate'' of states in the continuous model. Symbolic models enable the \textit{correct-by-design} synthesis of controllers and the synthesis of controllers for classes of specifications that traditionally have not been considered in the context of continuous control systems. These include \textit{qualitative} specifications formalized using temporal logics, such as \acf{LTL}. Besides addressing qualitative specifications, we are also interested in synthesizing controllers with \textit{quantitative} specifications, in order to solve optimal control problems. To date, the use of symbolic models for solving optimal control problems, is not well explored. This MSc Thesis presents a new approach towards solving problems of optimal control. Without loss of generality, such control problems are considered as path-planning problems on finite graphs, for which we provide two shortest path algorithms; one deterministic \acf{SDSP} algorithm and one non-deterministic \acs{SDSP} algorithm, in order to solve problems with quantitative specifications in both deterministic and non-deterministic systems. The fact that certain classes of qualitative specifications result in the synthesis of (maximally-permissive) controllers, enables us to use the \acs{SDSP} algorithms to also enforce quantitative specifications. This, however, is not the only path towards our goal of synthesizing controllers with mixed qualitative-quantitative specifications; it is possible to use the \acs{SDSP} algorithms directly to synthesize controllers for the same classes of specifications. Finally, we implement the algorithms as an extension to the \texttt{MATLAB} toolbox \texttt{Pessoa}, using Binary Decision Diagrams (BDDs) as our main data structure.","automatic synthesis; discrete abstraction; symbolic model; qualitative specifications; quantitative specifications; LTL; Pessoa; embedded control","en","master thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Delft Center for Systems and Control","","Embedded Systems","",""
"uuid:7bec892b-9fc7-4349-bbbf-bbfd4c859f19","http://resolver.tudelft.nl/uuid:7bec892b-9fc7-4349-bbbf-bbfd4c859f19","Automatic maps on the Gaussian integers","Krebs, T.J.P.","Fokkink, R.J. (mentor)","2013","How can you define automatic maps on the Gaussian integers? The two key components of an automatic map are an automaton, and a numeration system that represents every Gaussian integer at least once. We start by giving a brief introduction to automata and language theory, and go on to establish the existence of a numeration system for the Gaussian integers in every base. The literature is quite scarce on this latter subject, however, so we have to reproduce a referenced result that proved too hard to find. With the basic components covered, we define the concept of automatic maps, and show that it does not rely on the particular choice of numeration system in a given base. We then continue to prove that a map is automatic with respect to every multiplicatively dependent base, and show that there exist automatic maps that are not automatic in any multiplicatively independent base. Consequently, it reveals partly how an analogue of Cobham's deep theorem for the Gaussian integers will look like, and answers an open question in the literature negatively.","automata; numeration systems; gaussian integers; automatic sequences; automatic maps; regular languages; radix systems; cobham; ring","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Delft Institute of Applied Mathematics","","Applied Probability","",""
"uuid:afb5c976-333b-4964-aa61-b38b20069bb7","http://resolver.tudelft.nl/uuid:afb5c976-333b-4964-aa61-b38b20069bb7","Derivation of and Simulations with BiGlobal Stability Equations","Groot, K.J.","Van Oudheusden, B.W. (mentor); Pinna, F. (mentor)","2013","Laminar to turbulent transition has an important role in the aerospace domain in view of its impact on aerodynamic drag and, regarding the high velocity regime, heat transfer. State of the art computational methods, like DNS, LES and RANS are found to be too expensive or rely on case dependent turbulence models to be used for obtaining information regarding the transition phenomenon. Transition is typically initiated by the onset of instability of the laminar flow. Linear stability theory describes the eigenmode growth mechanism. Although this yields a restriction, because additional mechanisms play a role too, the eigenmode growth phase establishes an important base in many practical situations. However, the linearization provides a considerable step in the simplification of the analysis, while the stability theory can be adapted according to the structure of the given mean flow. At the Von Karman Institute (VKI), the VKI Extensible Stability and Transition Analysis (VESTA) toolkit has been developed, which mainly involves methods based on the linear stability theory. In the current project, the main goal was to extend the already present tools to incorporate the BiGlobal stability equations, which, together with appropriate boundary conditions, form an eigenvalue problem. This particular problem is solved for perturbations inhomogeneous in two spatial directions and their complex growth rate and frequency. This extension involved a new version of the tool for the derivation of the BiGlobal stability equations, a tool for their automatic implementation in Matlab via the spectral collocation method and a simulation tool to apply boundary conditions and execute the analysis corresponding to a prescribed mean flow. The derivation of the BiGlobal equations and their verification formed the first part of the project. Both incompressible and compressible versions are derived for different kinds of coordinate systems (e.g. Cartesian and cylindrical) and formulations in the compressible case (e.g. involving temperature and pressure and the energy equation based on static enthalpy). This allowed the verification of the tool with a large number of previously published references. All references, to the knowledge of the current author, that have thus far reported the compressible equations were found to contain errors and had to be cross-verified to yield the ultimate positive outcome. It is hence deemed that the present treatment is the first to report the full compressible BiGlobal stability equations in primitive variable formulation correctly. The second part of the project involved the verification of the performance of the combination of the derivation, implementation and simulation tools. This was done by considering three test cases (mean flows). In all cases, the eigenvalue problem was solved using the QZ algorithm. In cases that required high resolution, the Arnoldi algorithm was used in addition, because of its lean performance with respect to required memory. The first test case was the parallel Blasius boundary layer. Because of its one-dimensional nature, this flow has been intensively analysed in the past by means of the classic local stability analysis type (LST). 
This allowed the BiGlobal analysis of this mean flow to be thoroughly verified in both the incompressible and supersonic regime. The second case involved the developing incompressible Blasius boundary layer. This flow was chosen because of its better affinity with the actual Blasius boundary layer flow, which has an intrinsic developing nature. The BiGlobal approach involved artificial in- and outflow boundary conditions. Analyses were performed on a domain with a small and large streamwise extent to focus on a flow that is weakly and strongly developing, respectively. The former analyses were again compared to LST simulations to yield an internal verification and consistency check. The results of the analyses on the larger domain could be compared to the literature and were found to agree well in a qualitative sense. The Tollmien-Schlichting branch obtained in this study was found to lie too high with respect to the one reported in the literature. Although the exact reason for this could not yet be established, the most likely cause is a (small) difference in the prescribed mean flow. It is expected that the test case will yield identical results when exactly the same mean flow will be used, as some key differences can be identified in the literature in this regard. It was found that the artificial boundary conditions caused an odd/even effect with respect to the continuous eigenmode branches in the spectrum when the number of points in the streamwise direction was taken to be either odd or even. A similar behaviour was observed when consulting the literature, although the effect was never elaborated on explicitly. Lastly, the incompressible complex lamellar bidirectional vortex was considered. This mean flow is defined on a cylindrical coordinate system and is highly inhomogeneous in at least two spatial directions. Therefore, this case requires the BiGlobal approach and all power of the newly developed tools could be tested. A test case handled in the literature was very precisely reconstructed. Although it was found that no part of the spectrum was converged, the results were nearly identically retrieved. The solutions to all three test cases have been obtained successfully and compare reasonably well with the literature. It is therefore concluded that all capabilities of the newly developed tools have been tested successfully and the tools can be considered to be verified.","BiGlobal linear stability; VESTA; automatic derivation; non-parallel","en","master thesis","","","","","","","","","Aerospace Engineering","Aerodynamics","","","",""
"uuid:be3cf53e-1935-4b64-b903-06a3870e5c98","http://resolver.tudelft.nl/uuid:be3cf53e-1935-4b64-b903-06a3870e5c98","Automatic classification of vault jumps using video analysis","Oppedijk, P.L.","Veeger, H.E.J. (mentor)","2013","In sports, the use of motion-capture techniques increases, leading to a fast increase in valuable motion data. Automatic recognition and classification of the captured motions, provides an orderly structuring of the motion data. By this the users can easily retrieve specific motion data. In this thesis, we consider the automatic classification of vault jumps in gymnastics, captured by a high speed video camera system. A vault jump consists of a sequence of motions belonging to a predefined motion label, such as a Handspring. Then, the vault classification problem consists of automatically recognizing a vault jump in a video recording and assigning the appropriate label to the recording. To this end, we segment the vault classification problem into a sequence of vault-section classification problems. The following vault-sections are proposed; Type of Vault (TV), Number of Somersaults (NS), Type of Somersault (TS) and Number of Twists (NT). The segmentation into vault-sections allows for the development of a versatile classification system, capable of classifying a large number of vault classes based on a limited amount of data. Next, we use video analysis techniques to transform a video recording into feature representations, or so called feature sets, which reflect the specific characteristics of the vault jump throughout the four vault-sections. The four vault-section feature sets are then classified, resulting in four vault-section classifications. The final labeling of the recording of a vault jump is by the combined results of the four vault-section classifications. The proposed automatic vault classification system is based on the vault jump recordings made by Van de Eb et al. [1] at the world championships in gymnastics 2010. Extensive experiments have been conducted on these recordings for evaluating various feature sets and classifiers per vault-section, resulting in one best performing combination per vault-section. Furthermore, the vault-section classifications are evaluated on their influence on the classification performance of a complete vault jump. In the end, an overall classification rate of 69.5%, with a correct classification accuracy of 90.2%, is obtained for the classification of the vault jump recordings.","automatic vault classification","en","master thesis","","","","","","","","2013-09-11","Mechanical, Maritime and Materials Engineering","BioMechanical Engineering","","BMD","",""
"uuid:bb2d7a13-1bef-4545-bca0-f2b084a04240","http://resolver.tudelft.nl/uuid:bb2d7a13-1bef-4545-bca0-f2b084a04240","WebLab project","Van der Tuin, M.; Reijm, A.B.; De Jong, T.K.; Smits, J.","Visser, E. (mentor); Vergu, V.A. (mentor); Zaidman, A.E. (mentor)","2013","WebLab is an online academic tool used to improve education by providing a framework for teachers to supply a higher quantity and quality of assignments to students. Currently this system is being used in a variety of courses including the Concepts of Programming Languages course taught to bachelor students at the Delft University of Technology. As a tool is used more and more functionality must be added in order to meet the ever increasing demands. The goal of this project is just that, expand on the current system to provide support for a new set of features. Specifically two new courses want to start using WebLab. MySQL support is added to support the database part of the Web & Database Technology course so students can execute queries and test their code against the correct queries without seeing those queries. Java support is added to provide extra practice material for the Object-oriented Programming in Java course; students who are new to programming have a chance to practice with the material at their own pace without having to install a myriad of software packages. Aside from these main features other features including group support and random assignment collections are also included in this project. Finally, as with any other software engineering project we include our requirements analysis, system analysis, project process, and take an in depth look at the testing of such a diverse and complex system.","WebLab; LabBack; education; automatic grading","en","bachelor thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Software Computer Technology","","Software Engineering Research Group","",""
"uuid:51041c5c-66c2-4d47-bf20-91459a538f7d","http://resolver.tudelft.nl/uuid:51041c5c-66c2-4d47-bf20-91459a538f7d","A New Methodology for the Development of Simulation Workflows: Moving Beyond MOKA","Chan, P.K.M.","Van Dijk, R.E.C. (mentor)","2013","One of the main challenges in Multi-disciplinary Design Optimisation (MDO) is the interoperability of heterogeneous simulation tools. Some researches have reported that, due to these interoperability issues, only around 20% of the product development time is spent on analyses and creative design tasks. Clearly, there is a lot to gain, when it comes to improving this figure. Key to the success of MDO is Knowledge Based Engineering (KBE) and Simulation Workflow Management (SWFM) technologies. However, developing KBE and SWFM applications requires a substantial amount of programming knowledge and expertise. Due to these constraints, the technologies are less accessible to non-programmers. Additionally, there is an increased risk that applications may become black boxes when it is not clear what knowledge went into the application. This complicates sharing and reusing knowledge in future projects. Therefore, a methodology is needed to avoid these complications. MOKA, the most well-known methodology for developing KBE applications, focuses on the KBE side rather than SWFM side of design systems. Therefore, a new methodology is developed that covers SWFM. Meanwhile, the aim is to reduce the amount of required expertise for modelling simulation workflows. The methodology presents new step-by-step instructions to guide engineers in the modelling process. Furthermore, the methodology introduces new forms, the Business Process Model and Notation (BPMN), and an N2 notation to capture and structure process knowledge. This knowledge is then formalised (i.e. translated to a format which is closer to computer language) before the workflow is automatically generated in Optimus, a SWFM system for building executable workflows. For this purpose, a new integration framework has been developed, based on the Integrated Design and Engineering Architecture (IDEA), which evolved from the Design and Engineering Engine (DEE). The new framework couples a Knowledge Base (KB), product (KBE) and process (SWFM) tools. Reducing the required expertise is achieved by introducing High-Level Activities (HLA). Capturing lower-level knowledge in these HLAs allows for inexperienced engineers to model workflows at a higher abstraction level. Meanwhile, a new parametric high-level workflow has been designed, that enables engineers to optimise KBE product models without actually modelling a workflow. Both the HLAs and the parametric workflow are used in several use cases involving a packaging design optimisation and an MDO workflow for thermoplastic injection moulding. In the end, this work has delivered tools, methods, and a framework that increases transparency of SWFM applications, saves development time, and reduces required expertise to model simulation workflows.","Knowledge Based Engineering (KBE); Simulation Workflow Management (SWFM); automation; knowledge engineering; methodology; MOKA; Multi-disciplinary Design Optimization (MDO); engineering design framework; automatic workflow generation","en","master thesis","","","","","","","","","Aerospace Engineering","Flight Performance and Propulsion","","","",""
"uuid:7500de93-fcaf-460e-a938-fdc6324d5e7b","http://resolver.tudelft.nl/uuid:7500de93-fcaf-460e-a938-fdc6324d5e7b","Specifying requirements for Automatic Generalisation of Electronic Navigational Charts","Socha, W.","Stoter, J. (mentor); Van Oosterom, P. (mentor)","2012","This short summary helps to grasp the motive behind the research, its objectives and to find out what is presented on the following pages of the report. It offers a condensed, one page recapitulation of its contents and intentions and suggests who might be interested to read it. CONTEXT Map generalisation is a tedious task, requiring skilled cartographers to work for long periods of time. Experience shows that compiling a map can take several months. It is the common wisdom that such labour?intensive tasks should be consigned to computers and thus be accomplished more uniformly, more precisely, more rapidly, and at much reduced cost (Buttenfield & McMaster, 1991). The benefits of automatic generalisation could aid hydrographic offices (HOs) in their ENC creation. OBJECTIVES The aim of the project is to create ‘hard knowledge’ specifications that could be subsequently used to create/use with tools for automatic generalisation of ENCs. The research compiles requirements of various HOs with the recommendations of S?4 and knowledge in model and cartographic generalisation of topographic charts to create computer translatable rules that allow creating a smaller scale/usage ENCs from a higher scale/usage ENC / S?57 data without or with minimum human interference. DELIVERABLES AND THEIR IMPACT The final report present a set of specifications, rules and tools that allow going from one compilation scale (Approach) to another (Coastal) without or with minimum human interference. It also discusses shortcomings and rate of success of such approach. The study mainly bases on the existing generalisation operators available in the literature, but where it is just? points out scarceness of the choice and proposes new solutions. As a result, an IHO standard could be created for the generalisation of charts (ENCs) and tools implemented in the software used for chart creation. WHO SHOULD READ THIS REPORT? This report might be found interesting by the GIS community, especially when interested in advancements in digital cartography and ENCs. The main recipients, however, are the hydrographic community, mainly Hydrographic Offices, and hydrographic software vendors. They may find ideas for potential implementations that could aid their business. The secondary recipients could be other parties linked to Electronic Navigational Charts, namely ECDIS producers and chart users. The author hopes that this research could also inspire other projects on automatic chart generalisation and complement projects on bathymetric generalisation.","Nautical Charts; ENC; Electronic Navigational Charts; Automatic Generalisation; Generalisation Operators; Multiscale databases; Cartography; Hydrographic Offices; Chart Requirements; S?57; Safety of Navigation","en","master thesis","","","","","","","","","OTB Research Institute for the Built Environment","GIS technology","","GIMA","",""
"uuid:8c683733-546a-4fd2-8303-a2cf2edf3cd8","http://resolver.tudelft.nl/uuid:8c683733-546a-4fd2-8303-a2cf2edf3cd8","LabBack: An extendible platform for secure and robust in-the-cloud automatic assessment of student programs","Vergu, V.A.","Visser, E. (mentor)","2012","In software engineering education manual assessment of students’ programs is a time consuming endeavour that does not scale to high numbers of pupils or to large amounts of practice material. Numerous automatic grading programs, aim- ing to alleviate this problem, have been developed since 1960. These, however, only support fixed programming languages and fixed assessment methods, thus prohibiting their reuse throughout different programming curricula. Educators investigating new grading methods either have to accept the engineering burden of creating a complete grading system or revert to manual grading. This thesis presents LabBack - a reusable automatic grading platform that is extendible with language- and assessment-specific functionality by means of plugins. LabBack provides the necessary infrastructure for building and hosting automatic grading functionality, eliminating the need to consider the issues of scalability and security against malicious programs. LabBack can be hosted in the cloud and provides immediate student feedback. Plugins providing automatic assessment for Scala, JavaScript and C have been implemented and LabBack has been validated in a university-level course with over 100 students.","automatic grading; scalability; cloud; plugin; plugin framework; load distribution; load packing; education; programming education","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Computer Science","","Software Engineering","",""
"uuid:c7e67b12-6dc1-4b70-b1a9-443ad0730e60","http://resolver.tudelft.nl/uuid:c7e67b12-6dc1-4b70-b1a9-443ad0730e60","Automatic detection of benthos & birds: Microphytobenthos cover and bird number detection on the Galgeplaat mudflat using terrestrial imagery","Rammos, P.","Lindenbergh, R. (mentor)","2012","Ecological monitoring, i.e. the process of assessing the quality and health of natural habitats, is required to assess the impact of anthropogenic influence. This process is often inhibited by the expenses involved and a limited accessibility to the study site. The use of remotely sensed data then logically comes to mind as a potential solution. This thesis focuses on the ecological monitoring of an intertidal mudflat located in the Oosterschelde known as the Galgeplaat. It possesses a monitoring platform standing 15 meters tall producing imagery that was previously used for monitoring the morphology of the mudflat. The goal is to examine the potential of using this available terrestrial imagery for ecological monitoring of the mudflat and whether it is possible to do this automatically. Two separate case studies were formulated to investigate this potential; 1. automatic detectionof microphytobenthos and 2. automatic detection of bird numbers. The primary focus in this thesis was put on the microphytobenthos case study which elicited the most interest from involved parties. The main inhibiting factor for microphytobenthos detection was the presence of macroalgae (in particular brown macroalgae) in the images, which possess similar spectral properties to that of microphytobenthos. Two methods were used to detect microphytobenthos: I. maximum likelihood classification combined with the masking of the macroalgae (the undesired target) and II. Kohonen’s self organising maps (SOM). The results of this case study indicated that distinguishment between microphytobenthos and macroalgae was best achieved with the Self organizing map (SOM) approach. For the detection of bird numbers consecutive snapshot images of the camera were used such that the motion of birds could be taken advantage of. Background subtraction using a weighted mean background image and a standard deviation image was the most promising of the methods used to count the birds in the 20 frame video sequences. The video sequences with the least zoom (far scale) produced the most erroneous detections. This probably as a result of the lack of movement visible in the video and the small size of the birds (more interference from noise). The results suggest ecological monitoring, in this case of microphytobenthos cover and bird numbers on the Galgeplaat, is indeed possible by using the available terrestrial imagery from the platform. In the current state of development however, the process cannot be claimed fully automatic yet as some a priori knowledge from the user’s part is still required. Regardless, the use of remotely sensed imagery for ecological monitoring proves promising in comparison with current ecological monitoring for several reasons. Provided that unsuitable images are filtered out (images where raindrops are on the camera lens, or other irreparable images) the problem of limited accessibility to the mudflat is ruled out. This makes it possible to produce high time resolution data. 
Additionally, no costs are required for lab work or transport to the mudflat for bird counting and the collection of specimen.","image processing; ecology; automatic detection","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Geoscience and Remote Sensing","","Msc. Geomatics","",""
"uuid:78807acb-4115-478c-93de-68b9db884c8e","http://resolver.tudelft.nl/uuid:78807acb-4115-478c-93de-68b9db884c8e","Validation and Automatic Repair of Planar Partitions using a Constrained Triangulation","Arroyo Ohori, G.A.K.","Van Oosterom, P.J.M. (mentor); Ledoux, H. (mentor); Meijers, B.M. (mentor)","2010","Planar partitions (subdivisions of the plane into polygonal areas) constitute one of the most important data representations in GIS. They are used to model concepts as varied as land use, administrative units, natural features and cadastral parcels, among many others. However, since polygons are often stored separately, different errors and inconsistencies are introduced during their creation, manipulation (both manual and automatic) and exchange. These come in the form of invalid polygons, gaps, overlaps and disconnected polygons, which severely hampers their use in other software. Existing approaches to solve this problem usually involve polygon repair using a list of constraints, and complex planar partition repair operations performed on a planar graph. However, these have many shortcomings in terms of complexity, numerical robustness and difficulty of implementation. Moreover, they leave many invalid cases untouched. To solve this problem, a novel method to validate and automatically repair planar partitions has been developed. It uses a constrained triangulation of the polygons as a base, which being by definition a planar partition, means that only relatively simple operations are needed to ensure that the output becomes valid. Point locations are maintained throughout the process, while fully automatic repair is possible using customisable criteria. This approach is also extensible to individual polygons, is capable of handling a larger variety of cases and has good performance compared to existing alternatives; all of this with numerical robustness and maintaining topological consistency throughout. In order to analyse, test and improve the developed algorithms, and encourage further development, a fast and efficient implementation has been written in C++, which has been tested with several large data sets and compared with other available software, regarding both performance and functionality. This prototype is able to successfully repair planar partitions of more than 100,000 polygons. It is also open source and freely available on the GDMC website (http://www.gdmc.nl/).","planar partitions; polygonal coverages; topology; validation; automatic repair; simple features; constrained delaunay triangulation","en","master thesis","","","","","","","","2010-08-27","","GIS technology","","Geomatics","",""
"uuid:f0559ad8-1356-42bc-ab8b-c9e1677a2659","http://resolver.tudelft.nl/uuid:f0559ad8-1356-42bc-ab8b-c9e1677a2659","A New type of body-powered prosthesis: Using wrist flexion instead of shoulder movement","Nieuwendijk, J.","Van der Helm, F.C.T. (mentor)","2010","Body powered prostheses have many advantages: They are reliable, lightweight and relatively cheap. The disadvantage is the need of a shoulder harness, which causes discomfort, pain and trouble donning and doffing the prosthesis. The goal of this thesis is to develop a body-powered prosthesis without the need for a shoulder harness. This is realized by making a design that uses passive wrist flexion of the prosthesis itself to operate the grasping mechanism. The force and displacement are converted to a grasping motion by using a hydraulic system. The grasping force is enhanced by a pressure intensifier and holding an object is achieved by including an automatic lock.","prosthetics; hydraulics; two-phase mechanism; automatic locking; curved hydraulic cylinder","en","master thesis","","","","","","","","2010-08-27","Mechanical, Maritime and Materials Engineering","BioMechanical Engineering","","BMD","",""
"uuid:dccc1188-0e63-44c5-ba98-5476d65f20c6","http://resolver.tudelft.nl/uuid:dccc1188-0e63-44c5-ba98-5476d65f20c6","Building a visual speech recognizer","Driel, K.F.","Rothkrantz, L.J.M. (mentor)","2009","This thesis describes how an automatic lip reader was realized. Visual speech recognition is a precondition for more robust speech recognition in general. The development of the software comprised the following steps: gathering of training data, extracting meaningful features from the obtained video material, training the speech recognizer and finally evaluating the resulting product. First, research was done to gain insight on the theoretical aspects of automatic lip reading, the state of the art, speech corpus development, face tracking and feature extraction. Gathering training data came down to the recording and composing of a new audio-visual speech corpus for Dutch. With frontal and side images of 70 different speakers recorded at a frame rate of 100 frames per second this is the most diverse corpus currently in existence. Analysis of the new data corpus shows an increase in quality compared to other corpora. Visual information is obtained by searching the video footage. Using Active Appearance Models, points of an a priori defined model of the lower half of the face are tracked over time. Based on the model point coordinates, distance and area, features are computed that are used as input to the speech recognizer. Training was accomplished by presenting labeled training data to viseme-based Hidden Markov Models that model speech production. In a few steps the model parameters were adjusted, so that it could be used to perform recognition of visual speech signals from then on. The recognizer was implemented using tools from the Hidden Markov Model Toolkit. The results of a visual speech recognizer based on training data from a single person depend on the utterance type of the unlabeled data. For the simple word-level task of digit recognition 78% was recognized correctly with a word recognition rate of 68%. For letter recognition tasks it did not perform nearly as well, but considering the limitations that the use of visemes over phonemes imposes, these results are at the expected level. The data corpus and visual speech recognizer will be a valuable asset to future research.","automatic lip reading; visual speech recognition","en","master thesis","","","","","","","","","Electrical Engineering, Mathematics and Computer Science","Mediamatics","","","",""
"uuid:0cb78fd5-5bac-46f1-9ab3-e1f56a8e68de","http://resolver.tudelft.nl/uuid:0cb78fd5-5bac-46f1-9ab3-e1f56a8e68de","Automatic speech recognition using dynamic Bayesian networks","Van de Lisdonk, R.H.M.","Wiggers, P. (mentor); Rothkrantz, L.J.M. (mentor)","2009","New ideas to improve automatic speech recognition have been proposed that make use of context user information such as gender, age and dialect. To incorporate this information into a speech recognition system a new framework is being developed at the MMI department of the EWI faculty at the Delft University of Technology. This toolkit is called Gaia and makes use of Dynamic Bayesian networks. In this thesis a basic speech recognition system was built using Gaia to test if speech recognition is possible using Gaia and DBNs. DBN models were designed for the acoustic model, language model and training part of the speech recognizer. Experiments using a small data set proved that speech recognition is possible using Gaia. Other results showed that training using Gaia is not working yet. This issue needs to be addressed in the future and also the speed of the toolkit.","automatic speech recognition; dynamic bayesian network; dbn; asr; Gaia","en","master thesis","","","","","","","","2009-07-11","Electrical Engineering, Mathematics and Computer Science","Mediamatics","","","",""
"uuid:00f12183-2577-44c5-a8b1-77287681779b","http://resolver.tudelft.nl/uuid:00f12183-2577-44c5-a8b1-77287681779b","Automatic Registration of laser scanning data and colour images","Sablerolle, S.A.","Bucksch, A. (mentor)","2006","Laser scanning is a new technique for three dimensional point measurements. Over the last decades the number of applications in which these measurements take place, has grown significantly. In every application the surveyors are looking for the same result, namely a three dimensional point cloud of the object of interest. The purpose of this research is to improve the quality of the visualization: the objective is to automatically register a laser scanner point cloud with colour images. This objective is based on the developed data acquisition of Zoller + Fröhlich GmbH, Wangen im Allgäu, Germany: a digital camera is fixed on the laser scanner close to its centre. The instrument produces a full scan and colour images of its environment. The automatic registration of laser scanning data and colour images designed in this research is successfully implemented by making use of gradient operators, edge detectors and descriptors. Points of interest are found and matched in both the intensity image of the laser scanner and the colour images of the camera. The matched points are used to determine the external parameters of the colour image in object space with a non-linear rigid optimization. These parameters form the connection between the colour points and the laser 3D coordinates. Visualization can be made now by either plotting the colours onto the 3D points or plotting the range data onto the colour image.","laser scanning; automatic registration","en","master thesis","","","","","","","","","Civil Engineering and Geosciences","Geomatics","","Geomatic engineering","",""
"uuid:3c130adc-9e4a-4d54-93a0-74330b17599d","http://resolver.tudelft.nl/uuid:3c130adc-9e4a-4d54-93a0-74330b17599d","Research, design, simulation and implementation of an automatic flight control system for a real-time flight simulator","Brijl, B.R.","Van den Boom, A.J.J. (mentor); Levy, A. (mentor)","2003","Vertigo Flight Simulation is a company located in The Netherlands with its mission being to design, produce and sell low cost flight simulation devices to all levels of the flight training market. One of the products which is developed by Vertigo Flight Simulation is the Vertigo P-2 Trainer. The P-2 Trainer is a fixed-base flight simulator capable of simulating single and twin engine piston aircraft with Flight & Navigation Procedures Trainer Type II (FNPT II) approved aerodynamic and engine models. One important feature which is not yet available for the P-2 Trainer is an Automatic Flight Control System (AFCS) which contains an AutoPilot (AP) and a Flight Director (FD). This system is developed, which includes research, design and implementation, in this thesis by the author as its Master of Science (MSc) Thesis Project for obtaining his Engineering Degree in Electrical Engineering, specialization Avionics. An Automatic Flight Control System has two main functions: the first is to control the aircraft without the need for the pilot to fly the aircraft and the second function is to present the pilot with suggestions how he or she should control the aircraft to follow a certain, by the pilot chosen, attitude, course, altitude or flight plan / flight path. During research, it was found that Automatic Flight Control Systems are basically all designed and based on classical control system theory: inner-loops are used to control the aircraft (these are called the control loops) and outer-loops are used to guide the aircraft (these are called the guidance loops). The problem initially found in implementation of the system was to tune the various inner- and outer-loop gains to give desired aircraft-responses. This problem was overcome by approaching the problem from the classical control system theory point of view. Because the P-2 Trainer is developed with flexibility in mind, it uses a modular design to be able to easily change the aerodynamic aircraft-model and instruments to simulate a new type of aircraft. Because of this design, the Automatic Flight Control System must also be a flexible system. Since the method of simulation, testing and tuning the system is extremely time absorbing, a method will be presented to let the system adapt itself every time a new aerodynamic model is loaded in the flight simulator.","automatic flight control system; autopilot; flight simulation; afcs; avionics","en","master thesis","TU Delft, Electrical Engineering, Mathematics and Computer Science, Control and Simulation","","","","","","","","Electrical Engineering, Mathematics and Computer Science","","","","",""