Research Projects

Marmara University Computer Engineering Department focuses on doctoral-level research in several areas, including software engineering, parallel and distributed computing, multi-core computing, embedded systems, artificial intelligence, data mining, image and video processing, computer vision, natural language processing, bioinformatics, computer networks, wireless networks, instrumental networks, optimization and game theory.

There are a number of ongoing TÜBİTAK, Horizon 2020 and BAP projects in these fields that are led by our faculty members. In this section, we present a short summary of all ongoing and completed TÜBİTAK projects. The outcomes of the TÜBİTAK projects, as well as the details of BAP projects and other types of projects, can be accessed directly from the web pages of our faculty members.


Ongoing Projects

1. TÜBİTAK 3501 Project

Title: Energy-Aware Combinatorial Resource Allocation and Scheduling Models for Cloud Computing

Principal Investigator: Asst. Prof. Ali Haydar Özer

Abstract: Cloud computing provides a flexible platform that offers a variety of computing, storage and application services, which are virtually instantiated upon user request and support the pay-as-you-go pricing model. Providing cloud services requires physical infrastructures that use large amounts of energy, and this high energy usage also increases carbon emissions. Reducing the energy use of cloud infrastructures, which also reduces their carbon footprint, is crucial to ensure the sustainability of cloud computing services. In this project, resource allocation and scheduling problems are addressed in order to reduce the carbon footprint of cloud infrastructures. In the Energy-Aware Combinatorial Resource Allocation Model, users will be able to submit complex resource allocation requests to the cloud provider using a proposed resource request language. With this language, users will be able to define the resource sets required for their workflows using logical AND and OR operations, and they can also specify that only one request among a group of requests should be satisfied. In the Energy-Aware Combinatorial Resource Scheduling Model, on the other hand, the resource scheduling problem is addressed, in which users will be able to specify resource requests along with scheduling information. Furthermore, users will also be able to define priorities for dependent tasks in their workflows. The main objective of the project is to ensure sustainable development by minimizing the negative contribution of cloud infrastructures to global climate change.
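
The AND/OR request structure described above can be illustrated with a small sketch. The following Python snippet is not the project's actual request language; the node format and the exactly-one rule are assumptions for illustration. It represents a combinatorial request as a nested expression and checks whether a tentative allocation satisfies it.

```python
# Illustrative sketch of an AND/OR resource request: a user's combinatorial
# request is represented as a small expression tree and checked against a
# tentative allocation. The node format and the checking rule are assumptions
# for illustration only.
def satisfied(request, allocated):
    """request: nested tuples ('AND'|'OR'|'XOR', children) or a resource name."""
    if isinstance(request, str):
        return request in allocated
    op, children = request
    results = [satisfied(c, allocated) for c in children]
    if op == "AND":
        return all(results)
    if op == "OR":
        return any(results)
    if op == "XOR":                     # exactly one sub-request may be satisfied
        return sum(results) == 1
    raise ValueError(f"unknown operator: {op}")

# "(vm_small AND storage_100gb) OR vm_large", plus exactly one of two GPU options.
request = ("AND", [
    ("OR", [("AND", ["vm_small", "storage_100gb"]), "vm_large"]),
    ("XOR", ["gpu_a", "gpu_b"]),
])
print(satisfied(request, allocated={"vm_large", "gpu_b"}))  # True
```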

 

2. TÜBİTAK 1001 Project

Title: Soft Biometric Data Extraction from Spontaneous Multimodal Affective Face Videos

Principal Investigator: Prof. Dr. Çiğdem Eroğlu Erdem

Abstract: The goal of this project is to develop a soft biometric system that supports static texture-based face recognition systems under challenging conditions (e.g., low resolution, spoofing attacks) and makes them more reliable. The soft biometric features extracted from the face will not be restricted to age and gender; they will also include the person-specific component of the facial dynamics during an affective facial expression or speech. The components of facial dynamics that carry biometric information will be treated as a "signature" of that person and fused with the texture-based information for face recognition or verification.
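
As a rough illustration of the fusion idea described above, the sketch below combines a texture-based matching score with a dynamics-based "signature" score at the score level. The two scoring functions are assumed to exist elsewhere, and the fixed weights are illustrative, not the project's learned fusion.

```python
# Minimal sketch of score-level fusion: a texture-based face-matching score and
# a facial-dynamics "signature" score are combined into one decision score.
# Weights and threshold are illustrative assumptions.
def fuse_scores(texture_score, dynamics_score, w_texture=0.7, w_dynamics=0.3):
    """Both input scores are assumed to be normalized to [0, 1]."""
    return w_texture * texture_score + w_dynamics * dynamics_score

def verify(texture_score, dynamics_score, threshold=0.6):
    # Accept the identity claim only if the fused score clears the threshold.
    return fuse_scores(texture_score, dynamics_score) >= threshold

# A low-resolution texture match alone (0.55) would be borderline, but a strong
# dynamics signature (0.90) pushes the fused score above the threshold.
print(verify(0.55, 0.90))  # True: 0.7*0.55 + 0.3*0.90 = 0.655
```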


3. TÜBİTAK 1001 Project

Title: Development of Distributed Word Representations that Extract Semantic Relations for Supervised and Semi-Supervised Classification in Language Processing

Principal Investigator: Asst. Prof. Murat Can Ganiz

Abstract: This project aims to improve vector-based representations of words in order to improve the processing of human natural language with computers. There is ever-increasing public and scientific interest in Artificial Intelligence, yet one of the main challenges in that field – effectively interpreting human natural language with computers – remains unsolved. Interacting with computers in natural language requires a representation of words, their meaning, and their meaning in context with other words. A recent development toward that goal is Vector Space Models (VSMs) for words. VSMs represent each word by a high-dimensional vector which is learned automatically from a large unannotated natural language corpus. One of the pioneers of this method is Tomas Mikolov (Mikolov et al., 2013a, 2013b), who created a neural-network-based model for automatically learning word vectors based on the distributional hypothesis (Harris, 1954), which says that you "shall know a word by its company". This model and its open-source software Word2vec are a very popular example of Deep Learning (LeCun, 2015) and have received significant interest in the scientific community within a short time, with many extensions and improvements being published. As Word2vec learns from the context of a word, i.e., where it is usually 'embedded' in a text, this method is also known as Word Embeddings. Embeddings have been used to improve natural language processing (NLP) tasks such as Named Entity Recognition, Part-of-Speech Tagging and Dependency Parsing. Once trained, Word2vec word vectors have several interesting properties: for example, subtracting the vector for France from the vector for Paris and adding the vector for Germany yields a vector that is very close to the vector learned for Berlin. The fascinating and useful property here is that these embeddings have been learned without any human annotations, by methods which process natural language examples from the Internet.
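
The vector arithmetic mentioned above can be demonstrated with a toy example. The snippet below uses made-up three-dimensional vectors (real Word2vec embeddings are learned from large corpora and have hundreds of dimensions) to show the Paris - France + Germany ≈ Berlin analogy via cosine similarity.

```python
# Toy illustration of the word-vector analogy Paris - France + Germany ~ Berlin.
# The 3-dimensional vectors below are made up for demonstration only.
import numpy as np

vectors = {
    "Paris":   np.array([0.9, 0.1, 0.8]),
    "France":  np.array([0.8, 0.0, 0.1]),
    "Germany": np.array([0.1, 0.9, 0.1]),
    "Berlin":  np.array([0.2, 0.95, 0.85]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = vectors["Paris"] - vectors["France"] + vectors["Germany"]

# Rank all words (except the query terms) by similarity to the query vector.
candidates = {w: cosine(query, v) for w, v in vectors.items()
              if w not in ("Paris", "France", "Germany")}
print(max(candidates, key=candidates.get))  # expected: Berlin
```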


Completed Projects

 

1. Horizon 2020 Project

Title: Innovative Modeling Approaches for Production Systems to raise validatable efficiency (IMPROVE)

Principal Investigator: Assoc. Prof. Borahan Tümer

Abstract: The rise of system complexity and rapidly changing consumer demand require European industry to produce more customized products with better use of resources. The main objective of IMPROVE is to create a virtual Factory of the Future which provides services for user support, especially for optimization and monitoring. Through monitoring, anomalous behaviour will be detected before it leads to a breakdown. Anomalous behaviour is detected automatically by comparing sensor observations with an automatically generated model learned from observations. Learned models will be complemented with expert knowledge, because models cannot be learned completely from data alone. This will establish cheap and accurate model creation instead of manual modelling. Optimization will be performed and the results will be verified through simulations. The operator therefore has a broad decision basis as well as suggestions from a Decision Support System (DSS), which will improve the manufacturing system. Operator interaction will be handled by a newly developed Human Machine Interface (HMI) that presents the large amount of data in a reliable manner. The basis for IMPROVE is a set of industrial use cases which are transferable to various industrial sectors. The main challenges are reducing ramp-up phases, optimizing production plants to increase cost-efficiency, reducing time to production with condition-monitoring techniques, and optimizing supply chains using holistic data. Consequently, resource consumption, especially energy consumption in manufacturing activities, can be reduced. The optimized plants and supply chains enhance the productivity of manufacturing during different phases of production.
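
The model-versus-observation comparison mentioned above can be sketched as follows; the rolling-mean "model" and the fixed z-score threshold are illustrative stand-ins, not the learning methods developed in IMPROVE.

```python
# Minimal sketch of model-based anomaly detection: compare each sensor
# observation with the prediction of a model learned from past observations
# and flag large deviations. The "model" here is a simple rolling mean with a
# fixed z-score threshold, an illustrative stand-in only.
import numpy as np

def detect_anomalies(observations, window=20, threshold=3.0):
    observations = np.asarray(observations, dtype=float)
    anomalies = []
    for t in range(window, len(observations)):
        history = observations[t - window:t]
        predicted = history.mean()                 # learned "model" prediction
        spread = history.std() + 1e-9              # avoid division by zero
        z = abs(observations[t] - predicted) / spread
        if z > threshold:                          # deviation too large -> anomaly
            anomalies.append(t)
    return anomalies

# Example: a flat sensor signal with one injected fault at index 60.
signal = np.random.default_rng(0).normal(1.0, 0.05, 100)
signal[60] += 1.0
print(detect_anomalies(signal))  # expected: [60]
```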

 

2. TÜBİTAK 3501 Project

Title: Incentive Mechanisms for User Provided Networks

Principal Investigator: Asst. Prof. Ömer Korçak

Abstract: The increasing mobile data demand on mobile devices has led to the concept of the User Provided Network (UPN), which is based on sharing users' Internet connections. In this approach, users with a cellular connection share their own connections with others who have limited or no cellular connection. An important research problem in UPNs is incentivizing users to participate in the system. Designing incentive mechanisms for UPNs is a fairly new topic that has attracted the interest of both academia and industry in recent years. In this project, we will study incentive mechanisms for both self-organizing autonomous UPNs and operator-controlled UPN services. Energy consumption, data usage costs, and utility functions will be modelled carefully by performing empirical measurement-based studies as well as by characterizing realistic user demands. Strategic decisions of users and operators will be studied using game theory and mechanism design tools. The main goal of this project is to design incentive mechanisms that yield efficiency, fairness, individual rationality, truthfulness and trustworthiness.
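
A minimal sketch of the kind of utility modelling mentioned above: a host weighs the incentive payment against its energy and data costs, and participates only if its utility is non-negative (individual rationality). The linear cost model and all parameters are assumptions for illustration, not the project's models.

```python
# Illustrative host utility for a User Provided Network: the host decides
# whether to share its cellular connection by weighing the incentive payment
# against its energy and data costs. All parameters are hypothetical.
def host_utility(shared_mb, payment_per_mb, energy_cost_per_mb, data_cost_per_mb):
    revenue = payment_per_mb * shared_mb
    cost = (energy_cost_per_mb + data_cost_per_mb) * shared_mb
    return revenue - cost

# Individual rationality: a host participates only if its utility is non-negative.
def willing_to_share(shared_mb, payment_per_mb, energy_cost_per_mb, data_cost_per_mb):
    return host_utility(shared_mb, payment_per_mb,
                        energy_cost_per_mb, data_cost_per_mb) >= 0

print(willing_to_share(100, payment_per_mb=0.05,
                       energy_cost_per_mb=0.01, data_cost_per_mb=0.02))  # True
```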

 

3. TÜBİTAK 1001 Project

Title: Developing exact and heuristic solution methods for the combined solution of obstacle neutralization and random disambiguation problems

Principal Investigator: Assoc. Prof. Ali Fuat Alkaya

Abstract: Much work has been done on the random disambiguation paths problem (RDPP), which originates from a naval scenario in which troops try to find their paths through a minefield, or from a similar scenario in which merchant ships try to navigate safely and swiftly in icy waters. Another problem that has attracted the interest of researchers is the obstacle neutralization problem (ONP), which is closely related to real-life problems such as routing in the telecommunication industry, curve approximation, and finding minimum-risk paths for military air vehicles. Both problems also arise in autonomous robot navigation applications. In the problem we tackle in this project, the agent is allowed to make K1 disambiguations and K2 neutralizations. This problem has not been defined in the literature before, and we call it the Disambiguation or Neutralization Problem (DNP). DNP is NP-hard, since both RDPP and ONP are proven to be NP-hard. Like RDPP and ONP, DNP arises in real life. Two examples are path planning for rescue robots, which must avoid areas of high heat, radiation or slippery ground and can advance by extinguishing extinguishable fires, and navigation for ships, which can advance by breaking through breakable ice while detouring around unbreakable ice. The objective of this project is to determine the least-cost policy for an agent with both disambiguation and neutralization capabilities to reach its destination. In this context, an exact algorithm will be developed first, followed by several heuristic algorithms, and their performance will be evaluated through computational tests.
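
A deliberately simplified sketch of the exact-search idea follows: on a grid, an obstacle cell can only be entered by spending either one of K1 disambiguations or one of K2 neutralizations, each with its own cost, and a Dijkstra search runs over (cell, remaining budgets) states. Real RDPP/ONP/DNP formulations are probabilistic and much richer; this deterministic toy only illustrates the budget-constrained shortest-path structure.

```python
# Simplified, deterministic sketch of budget-constrained path planning:
# Dijkstra over states (cell, remaining disambiguations, remaining
# neutralizations). Costs and the grid are illustrative assumptions.
import heapq

def cheapest_path(grid, start, goal, k1, k2, disamb_cost=3.0, neutral_cost=5.0):
    rows, cols = len(grid), len(grid[0])
    # State: (cost, row, col, remaining disambiguations, remaining neutralizations)
    frontier = [(0.0, start[0], start[1], k1, k2)]
    best = {}
    while frontier:
        cost, r, c, d, n = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        if best.get((r, c, d, n), float("inf")) <= cost:
            continue
        best[(r, c, d, n)] = cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if grid[nr][nc] == 0:                       # free cell, unit step cost
                heapq.heappush(frontier, (cost + 1.0, nr, nc, d, n))
            else:                                       # obstacle cell
                if d > 0:                               # spend a disambiguation
                    heapq.heappush(frontier, (cost + disamb_cost, nr, nc, d - 1, n))
                if n > 0:                               # or spend a neutralization
                    heapq.heappush(frontier, (cost + neutral_cost, nr, nc, d, n - 1))
    return None  # goal unreachable with the given budgets

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(cheapest_path(grid, (0, 0), (0, 2), k1=1, k2=1))  # 4.0: through one obstacle
```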

 

4. TÜBİTAK 1001 Project

Title:  A Cross-Layer Reliability Optimization Framework for Manycore Architectures

Principal Investigator:  Prof. Dr. Haluk Rahmi Topcuoglu

Abstract: Modern architectures are vulnerable to soft errors due to shrinking transistor sizes and high frequencies. Cache structures in a multicore system are particularly vulnerable to soft errors due to their high transistor density, and protecting all caches unselectively has a notable overhead in performance and energy consumption. In this project, we propose asymmetrically reliable caches to meet the reliability needs of the system with sufficient additional hardware under performance and energy constraints. In our reliability optimization framework, a chip multiprocessor is composed of at least one high-reliability core, which has ECC protection on its L1 cache, and a set of low-reliability cores, which have no protection on their L1 caches. Application threads are mapped onto cores of different reliability based on their critical data usage. In our framework, reliability-critical code regions are assumed to be the high-priority functions, which are extracted statically by examining execution-time percentages and the call graph. Software threads executing reliability-critical code regions are mapped onto the protected cores, whereas threads executing non-critical regions are mapped onto the unprotected ones, dynamically during execution. By protecting only the reliability-critical regions of the applications, our framework provides notable power and cost savings while maintaining comparable performance and reliability for the set of functions reported in the experimental results. As part of the project, we propose and evaluate various scheduling algorithms for mapping the application threads onto the protected cores. In our first approach, we started with a primitive scheduler based on a First Come First Served (FCFS) policy; different types of priority-based and equal-time-based scheduling techniques were proposed and utilized in the later phases of the project.
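
The dynamic mapping policy described above can be sketched as follows: threads whose current code region is reliability-critical go to ECC-protected cores and the rest to unprotected cores, in FCFS order. The thread/core structures and the fallback rule are illustrative assumptions, not the project's scheduler.

```python
# Minimal sketch of FCFS mapping of threads onto protected/unprotected cores.
# The data structures and fallback rule are illustrative assumptions.
from collections import deque
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    in_critical_region: bool  # currently executing a reliability-critical function

def fcfs_map(threads, protected_cores, unprotected_cores):
    """Assign threads to cores in arrival order (FCFS)."""
    protected = deque(protected_cores)
    unprotected = deque(unprotected_cores)
    mapping = {}
    for t in threads:
        pool = protected if t.in_critical_region else unprotected
        if not pool:
            # Preferred pool exhausted; a real scheduler might instead queue the thread.
            pool = unprotected if t.in_critical_region else protected
        mapping[t.name] = pool.popleft()
    return mapping

threads = [Thread("t0", True), Thread("t1", False), Thread("t2", True)]
print(fcfs_map(threads, protected_cores=["core0"], unprotected_cores=["core1", "core2"]))
```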


5. TÜBİTAK 3501 Project

Title: Interpreting Natural Language using Answer Set Programming, Inconsistency Management, and Relevance Theory 

Principal Investigator: Asst. Prof. Peter Schüller

Abstract: In the Inspire project we propose to advance the field of Natural Language Understanding (NLU) by combining techniques of Answer Set Programming, inconsistency management, and Relevance Theory for interpreting natural language. Natural language is a very efficient form of communication: humans leave out many details when they use language because other humans can easily fill in these details. As an example, 'morning coffee' means 'coffee drunk in the morning' while 'morning newspaper' means 'newspaper read in the morning', and humans understand this without effort although neither 'drunk' nor 'read' is visible in the text. This underspecification, often combined with the high ambiguity of natural language, is a big challenge for NLU systems. The Inspire project aims to advance scientific methods that allow computers to interpret natural language text with the goal of recovering its intended meaning using background knowledge bases (WordNet, FrameNet). Within the scope of the Inspire project, 4 SCI-E articles, 8 conference and workshop papers, and one PhD thesis were published.
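
As a small illustration of drawing on a background knowledge base such as WordNet, the snippet below looks up senses of a word with NLTK and walks up their hypernym chains; it only shows the kind of knowledge an NLU component can use and is not part of the Inspire system.

```python
# Small illustration of WordNet as a background knowledge base: look up senses
# of a word and walk up their hypernym chains. Requires the nltk package and a
# one-time nltk.download('wordnet').
from nltk.corpus import wordnet as wn

for synset in wn.synsets("coffee")[:2]:
    print(synset.name(), "-", synset.definition())
    # Hypernyms generalize the concept, e.g. coffee -> beverage -> liquid ...
    chain = synset.hypernyms()
    while chain:
        print("   is-a:", chain[0].name())
        chain = chain[0].hypernyms()
```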


6. TÜBİTAK 1001 Project

Title: Application Scheduling and Optimization for Chip Multiprocessor (CMP) Architectures

Principal Investigator:  Prof. Dr. Haluk Rahmi Topcuoglu

Abstract: In this project, we developed static and dynamic techniques for mapping application threads onto the cores of a given chip multiprocessor (CMP) architecture. For the static application mapping case, we designed two novel parallel formulations of the Barnes-Hut algorithm on the Cell Broadband Engine architecture, taking into account the technical specifications and limitations of the Cell architecture. Our experimental evaluation indicates that this application performs much faster on the Cell architecture than on the reference architecture, an Intel Xeon based system. Our first system for dynamic application mapping assigns application threads onto cores and maps the data they manipulate onto the available on-chip memory components. In addition, a locality-aware dynamic mapping algorithm is proposed in this study, which aims to assign computations with similar data access patterns to the same core. Beyond our performance-aware application mapping techniques, a novel metric called the Thread Vulnerability Factor (TVF) was proposed and developed as part of this project in order to measure the reliability of multi-threaded applications. The TVF metric measures the vulnerability of a thread against transient errors; the TVF of a given thread (typically one of the threads of a multithreaded application) does not depend on its code alone, but also on the code of the threads that share resources and data with it. Because it efficiently identifies the less reliable or less fault-tolerant threads of a multithreaded application, our TVF metric plays a vital role in proposing reliability-aware application mapping strategies.
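
The locality-aware mapping idea can be sketched as follows: each thread is assigned to the core whose already-mapped threads share the most data blocks with it, with ties broken by load. The access "signature" and the greedy rule are simplifying assumptions, not the project's algorithm.

```python
# Illustrative sketch of locality-aware mapping: threads with similar data-access
# patterns are grouped onto the same core so they can share on-chip memory.
def locality_aware_map(thread_accesses, cores):
    """thread_accesses: {thread: set of data-block ids}; cores: list of core ids."""
    assignment = {}
    core_blocks = {c: set() for c in cores}
    load = {c: 0 for c in cores}
    for thread, blocks in thread_accesses.items():
        # Prefer the core sharing the most data blocks; break ties by lighter load.
        best = max(cores, key=lambda c: (len(core_blocks[c] & blocks), -load[c]))
        assignment[thread] = best
        core_blocks[best] |= blocks
        load[best] += 1
    return assignment

accesses = {"t0": {1, 2}, "t1": {2, 3}, "t2": {7, 8}, "t3": {8, 9}}
print(locality_aware_map(accesses, cores=["c0", "c1"]))
```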


7. TÜBİTAK 1001 Project

Title: Locating and Utilizing Sensors with Hybrid Evolutionary Algorithms in a Synthetically Generated Landscape

Principal Investigator:  Prof. Dr. Haluk Rahmi Topcuoglu

Abstract: Deploying and configuring multiple sensors for the acquisition of a given area is a fundamental research problem in various fields, including computer vision, autonomous systems and robotics. In this project, we presented a novel multi-attribute utility theory based model and a framework for determining the types and number of sensors, locating the selected sensors, and setting their orientational sensor-specific parameters, including heading and tilt angles, on a synthetically generated 3-D terrain with multiple objectives. Our model relies on a rational trade-off between three conflicting objectives: maximizing the coverage area, maintaining maximum stealth, and minimizing the total acquisition cost of deploying the sensors. In addition to the theoretical foundations, we developed a new hybrid evolutionary algorithm for the sensor placement problem. We incorporated new and specialized operators for hybridization, including problem-specific heuristics for initial population generation, intelligent variation operators (a Contribution-Based Crossover operator and a Proximity-Based Crossover operator) that incorporate problem-specific knowledge, and a local search phase. The experimental study validates finding the optimal balance among the visibility-oriented, stealth-oriented and cost-oriented objectives. In this project, we also developed a new hybrid solution for path planning of moving sensors with different access characteristics, including different climbing slopes and different minimum turning angles.
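
Below is a sketch of the kind of multi-attribute utility that such an evolutionary algorithm could maximize, combining coverage, stealth and cost into a single fitness value; the weights, normalization and candidate format are illustrative assumptions only.

```python
# Sketch of a multi-attribute utility fitness combining coverage, stealth and
# cost into one score that an evolutionary algorithm could maximize. Weights
# and normalization are illustrative assumptions.
def fitness(coverage_ratio, stealth_ratio, cost, max_budget,
            w_cov=0.5, w_stealth=0.3, w_cost=0.2):
    # coverage_ratio and stealth_ratio are in [0, 1]; cost is normalized against
    # the maximum budget so that all attributes share one scale.
    cost_utility = 1.0 - min(cost / max_budget, 1.0)
    return w_cov * coverage_ratio + w_stealth * stealth_ratio + w_cost * cost_utility

# Comparing two candidate sensor deployments:
print(fitness(coverage_ratio=0.80, stealth_ratio=0.60, cost=400, max_budget=1000))  # 0.70
print(fitness(coverage_ratio=0.70, stealth_ratio=0.90, cost=250, max_budget=1000))  # 0.77
```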


8. TÜBİTAK 3001 Project

Title:  Open-Minded Coreference Resolution Sieve Based on Answer Set Programming

Principal Investigator: Asst. Prof. Peter Schüller

Abstract: In the OmSieve project we will apply methods of Answer Set Programming to coreference resolution. We want to deal with linguistic ambiguities on all levels of representation in a way that is more open-minded than existing methods. Answer Set Programming (ASP) is a general-purpose logic programming formalism that supports comfortable representation of knowledge, nonmonotonic reasoning processes, and reasoning with hybrid knowledge bases. In an ASP logic program we describe (i) a set of potential solutions, (ii) relationships between concepts in the solution, and (iii) constraints on solutions. Given such a representation, an ASP solver (a software tool) computes those solutions that adhere to the specified relationships and constraints. Coreference resolution is the computational linguistics task of finding out which phrases of a natural language discourse refer to the same entity in the world. Coreference resolution is challenging: noun phrases can refer to the same entity for various reasons; they can be synonyms, hypernyms, or hyponyms, or they can be coreferent because of background knowledge and discourse information. In this project we want to use Answer Set Programming to create an open-minded coreference resolution method. We also aim to create a coreference resolution method for Turkish. As a result of the OmSieve project, 2 SCI-E articles, 4 conference papers, and one MSc thesis were published.
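
The generate-and-test flavour of the ASP recipe above, items (i)-(iii), can be mimicked in a tiny Python sketch: enumerate candidate labellings of mention pairs and keep only those satisfying an agreement constraint. The mentions, features and constraint are toy assumptions; the project itself uses an ASP solver, not brute-force enumeration.

```python
# Tiny generate-and-test sketch mirroring the ASP recipe: (i) enumerate
# candidate solutions (which mention pairs are coreferent), (ii)+(iii) keep
# only those satisfying the constraints. All data here is a toy assumption.
from itertools import product

mentions = {
    "Mary": {"gender": "f", "number": "sg"},
    "she":  {"gender": "f", "number": "sg"},
    "they": {"gender": None, "number": "pl"},
}
pairs = [("Mary", "she"), ("Mary", "they"), ("she", "they")]

def agrees(a, b):
    # Constraint: coreferent mentions must not conflict in gender or number.
    for feat in ("gender", "number"):
        va, vb = mentions[a][feat], mentions[b][feat]
        if va is not None and vb is not None and va != vb:
            return False
    return True

# Candidate solutions: every assignment of coreferent / not-coreferent to pairs.
solutions = []
for labels in product([True, False], repeat=len(pairs)):
    if all(agrees(a, b) for (a, b), lab in zip(pairs, labels) if lab):
        solutions.append({p: l for p, l in zip(pairs, labels)})
print(len(solutions), "constraint-satisfying candidate solutions")  # 2
```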


9. TÜBİTAK 3001 Project

Title: Development and application of an exact solution method and metaheuristics to the neutralization problem

Principal Investigator: Assoc. Prof. Ali Fuat Alkaya

Abstract: Within the scope of this project, theoretical and computational research has been conducted to develop new solution methods for the neutralization problem, a problem inspired by a real naval scenario. An exact method that can be used in online applications has been developed for the neutralization problem. The performance of the proposed exact method is compared with that of the non-commercial solver SCIP, and it is observed that the proposed exact method obtains the optimal solution in drastically shorter time. In cases where the exact method does not perform effectively in terms of run time, metaheuristics are used: an ant colony algorithm (ACA), a genetic algorithm (GA), a migrating birds optimization (MBO) algorithm and simulated annealing (SA) have been developed and applied to the problem.
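
Below is a generic simulated annealing skeleton of the kind listed above, shown on a toy permutation problem; the neighbourhood move, cooling schedule and cost function are illustrative placeholders rather than the operators developed for the neutralization problem.

```python
# Generic simulated annealing skeleton, demonstrated on a toy problem
# (sorting a permutation). Parameters and operators are illustrative only.
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=10.0, cooling=0.995, steps=5000):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # Accept improvements always, worse solutions with probability exp(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost

# Toy cost: number of adjacent pairs that are out of order (0 means sorted).
def toy_cost(perm):
    return sum(1 for a, b in zip(perm, perm[1:]) if a > b)

def swap_two(perm):
    i, j = random.sample(range(len(perm)), 2)
    p = list(perm)
    p[i], p[j] = p[j], p[i]
    return p

start = random.sample(range(10), 10)
print(simulated_annealing(start, toy_cost, swap_two))
```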

 

 

