Current Research Projects and their Significance

Massive Uncoordinated Multiple Access design, 5G wireless and the Internet of Things

Joint project with Prof. Jean-Francois Chamberland and Prof. P.R. Kumar
Sponsor - National Science Foundation

The wireless landscape is poised to change, once again, within the next few years with the emergence of machine-driven communications, an important component of the Internet of Things (IoT). This new reality will alter the statistical profile of typical wireless traffic. In addition to individuals interacting with personal phones or mobile computers, a sizable portion of wireless resources will be consumed by legions of unattended devices that seek to disseminate information in a random fashion. For instance, a base station in a fully deployed IoT system may need to serve several hundred clients concurrently. Currently deployed access schemes, which rely on sustained connectivity, channel estimates, and scheduling policies, are ill-equipped to handle data packets generated sporadically by a myriad of wireless agents. The overarching goal of this research initiative is to address this deficiency and devise novel access schemes tailored to massive, uncoordinated, and sporadic multiple access, thereby readying wireless infrastructures for the traffic of tomorrow. The innovative aspect of this initiative lies in exploring and exploiting new connections between multiple access and coding theory. By leveraging these connections, our results show that even when distributed transmitters send their information in an uncoordinated way, the interference between them can be managed very efficiently and their throughput can be nearly as good as if they were coordinated! This project embraces the evolving perspective of harnessing interference in wireless networks rather than fighting or avoiding it. This viewpoint underlies many recent successes in network coding and distributed storage, and the project brings it to the design of large-scale wireless networks.
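As a point of reference for why classical uncoordinated access is inefficient, the textbook slotted-ALOHA scheme, in which each device transmits at random without coordination, achieves a throughput of at most $1/e \approx 0.37$ packets per slot. A minimal simulation (illustrative only; the function name is ours and this is not the project's scheme):

```python
import random

def slotted_aloha_throughput(n_users, p_tx, n_slots=100_000, seed=0):
    """Simulate slotted ALOHA: each user transmits independently with
    probability p_tx in each slot; a slot succeeds iff exactly one
    user transmits. Returns the fraction of successful slots."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(rng.random() < p_tx for _ in range(n_users))
        successes += (transmitters == 1)
    return successes / n_slots

# With n users each transmitting with probability 1/n, the throughput
# sits near the 1/e ~ 0.368 optimum, far below coordinated access.
print(slotted_aloha_throughput(100, 0.01))
```

Closing this gap, so that uncoordinated devices approach coordinated throughput, is precisely the kind of improvement the coding-theoretic connections above make possible.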


Enhancing Radio-Frequency Spectrum Through Interference Resilient Cognitive Radio Systems: Design, Performance Analysis and Optimization

Joint project with Prof. Aydin Karsilayan, Prof. Jose Silva-Martinez, Prof. Erchin Serpedin
Sponsor - National Science Foundation

The continuous increase in the number of wireless devices and sensors, along with the huge demand for higher data rates and the limited radio-frequency spectrum, has prompted the need for novel wireless communications technologies with improved spectrum-sharing features. Recent radio-spectrum measurements have confirmed that the radio spectrum is being used inefficiently; consequently, the concept of cognitive radio has been proposed as a promising approach for efficient utilization of the radio-frequency spectrum. A cognitive radio is a communication system equipped with the abilities to learn its surrounding environment through sensing and measurements and to adapt its features for better utilization of existing spectrum resources, with the aim of securing communication links with adequate quality of service. This project addresses several important problems that must be overcome before cognitive radio systems can be implemented in practice. These challenges include the design of circuits with adequate precision for processing the received signals, the design of fast and computationally efficient ways to sense the occupancy of the spectrum, and the design and analysis of multiple-access schemes that will permit many users to share the spectrum without causing undue interference to one another. The project will use innovative techniques to solve these challenges, thereby enabling better utilization of the available radio spectrum resources. Potential applications of the proposed work include radio astronomy, communication networks, smart grids, wireless sensing and monitoring devices, remote monitoring of the Earth, and telemedicine.
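Spectrum sensing is commonly introduced via the classic energy detector, which declares a band occupied when the received energy exceeds a threshold tied to the noise floor. The sketch below is a generic textbook illustration with made-up parameters, not the circuits or sensing algorithms developed in this project:

```python
import numpy as np

def energy_detector(samples, noise_var, threshold_factor=2.0):
    """Declare the band occupied when the average sample energy exceeds
    a threshold set relative to the known noise floor. The factor trades
    false alarms against missed detections."""
    energy = np.mean(np.abs(samples) ** 2)
    return bool(energy > threshold_factor * noise_var)

rng = np.random.default_rng(1)
n = 4096
noise = rng.normal(0.0, 1.0, n)              # noise-only observation
signal = 2.0 * rng.normal(0.0, 1.0, n)       # hypothetical primary-user signal
print(energy_detector(noise, 1.0))           # band idle -> False
print(energy_detector(noise + signal, 1.0))  # band occupied -> True
```

In practice the threshold must account for noise-floor uncertainty and hardware imprecision, which is one reason circuit design and sensing algorithms are studied jointly in this project.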


Advanced Coding Techniques for Next-Generation Optical Communications

Joint project with Prof. Gwan Choi, Texas A&M University and Prof. Henry Pfister, Duke University
Sponsor - National Science Foundation

In recent years, there has been an explosion of data traffic over the internet. With the popularity of video streaming, cloud computing, and the rapid dissemination of user-generated content through social networks, there is no doubt that this trend will continue. In order to support these services, the data rates carried over the optical transport networks that constitute the internet backbone have been constantly increasing, and this trend is expected to continue. While 100 gigabit-per-second (Gb/s) optical transport networks are being deployed, even conservative estimates predict that data rates in next-generation optical transport networks will increase to 400 Gb/s in 2016, 1 terabit per second (Tb/s) in 2019, and 10 Tb/s in 2025. As the data rate increases, the optical signal-to-noise ratio of the fiber-optic channel decreases substantially and the bit error rate increases. This project considers the design and analysis of advanced error-correcting codes that mitigate transmission errors and provide reliable communication for internet traffic. The design and implementation of advanced channel coding techniques at extremely high data rates is very challenging due to hardware constraints. This is exacerbated by the fact that the desired code rates are high (e.g., greater than 0.8) and the target bit error rates are extremely low (e.g., on the order of $10^{-15}$, i.e., on average the system may make only one error per $10^{15}$, or one quadrillion (peta), transmitted bits). These constraints call for innovative ideas for the design of advanced channel coding techniques and cross-disciplinary interaction between researchers who focus on algorithm design and researchers who specialize in hardware implementation. The transformative nature of the project lies in the fact that several novel classes of codes and computationally efficient decoders will be designed and analyzed.
Another important aspect of this project is a design methodology that leverages close interaction between algorithm design and hardware implementation, culminating in the implementation of codes and decoders on field-programmable gate arrays. The broader impacts of this project will be maximized by planned initiatives that aim to expand the scope of the telecommunications, signal processing, and very-large-scale integration (VLSI) curricula at Texas A&M University and Duke University. It will also promote collaboration between the two universities in the design, development, and implementation of educational activities.
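As a rough sanity check on what a $10^{-15}$ bit-error-rate target means at the line rates quoted above (illustrative arithmetic only, not project results):

```python
# At an output BER of 1e-15, how often does a link still make a bit
# error at next-generation line rates? (Back-of-the-envelope only.)
rates_bps = {"400 Gb/s": 400e9, "1 Tb/s": 1e12, "10 Tb/s": 10e12}
ber = 1e-15

for name, rate in rates_bps.items():
    seconds_per_error = 1.0 / (rate * ber)
    print(f"{name}: one bit error every {seconds_per_error:.0f} s on average")
```

Even at these astonishingly low error rates, a 10 Tb/s link would still err every couple of minutes, which is why the decoder itself must deliver such extreme reliability at full line rate.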


Interference-Aware Cooperation in Wireless Networks

Joint project with Prof. Bobak Nazer, Boston University and Prof. Behnaam Aazhang, Rice University
Sponsor - National Science Foundation

The classical approach to wireless communication is to isolate communication links by maximizing signal strength and minimizing interference between users. This simple philosophy is supported by a rich theoretical foundation which has inspired powerful coding techniques and protocols that lie at the heart of modern wireless systems. However, these systems have recently become victims of their own success, as the rising density and data requirements of wireless devices have led to a surge in interference. Fortunately, an emerging body of work indicates that the phenomenon of interference may in fact represent an untapped opportunity for increasing the spectral and energy efficiency of next-generation wireless systems. The key insight comes from the realization that interference at a node is really the transmitted signal from another node, and that there is mathematical structure to these transmitted signals. If this structure can be leveraged carefully, then interference no longer needs to be treated as a nuisance. Although many interference-aware communication strategies have been proposed in the literature, the promised gains have been mostly limited to the theoretical realm. The objective of this project is to create practical interference-aware wireless protocols that can operate near the performance predicted by theoretical bounds in terms of throughput, energy efficiency, and reliability. The project is organized into three complementary thrusts that encompass theory, algorithms, and practice. The first thrust investigates novel algebraically structured codes. The second thrust aims to implement these protocols on a three-node WARP (Wireless Open-Access Research Platform) testbed. A series of carefully designed experiments will be used to compare the performance of interference-aware strategies while accounting for overhead costs.
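The kind of mathematical structure alluded to above can be illustrated with a toy binary linear code: the mod-2 sum of any two codewords is itself a codeword, so a receiver can decode a superposition of transmissions directly rather than separating each signal. This sketch uses the classic (7,4) Hamming code and is only an illustration, not the lattice constructions studied in this project:

```python
import numpy as np

# Generator matrix of the (7,4) Hamming code in systematic form.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(msg):
    """Map a 4-bit message to its 7-bit codeword (arithmetic mod 2)."""
    return (np.array(msg) @ G) % 2

# Enumerate all 16 codewords of the code.
all_codewords = {tuple(encode([(i >> k) & 1 for k in range(4)]))
                 for i in range(16)}

c1 = encode([1, 0, 1, 1])   # codeword sent by user 1
c2 = encode([0, 1, 1, 0])   # codeword sent by user 2
# The mod-2 superposition of two codewords is again a codeword, so a
# receiver can decode the sum (here, the XOR of the two messages)
# without decoding each user separately.
assert tuple((c1 + c2) % 2) in all_codewords
```

Compute-and-forward style schemes exploit exactly this closure property, but over lattices, so that the noisy real-valued superposition of signals still carries a decodable codeword.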
The third thrust leverages the data collected from these experiments to revise channel models so that they capture key features that impact the performance of interference-aware strategies, such as asynchronism and channel fluctuations. These models will be used to revisit the theoretical foundations of interference-aware strategies and tailor them to the channels encountered in practice. This project includes several outreach efforts, including undergraduate research experiences connected to the WARP testbed and the creation of a public repository of training modules and videos.


Coding Theory in Compressive Sensing of Big Data

Joint project with Prof. Simon Foucart, Department of Mathematics, TAMU
Sponsor - Looking for sponsors

In the era of big data, there is a critical need for sophisticated approaches to the acquisition (sensing), storage (compression), and processing of an unprecedentedly large volume of data. A new mathematical theory called Compressive Sensing shows that high-dimensional objects can be captured from only a limited number of samples when a hidden structure such as sparsity is exploited. It tells scientists how to acquire (sense) large data sets and compress them simultaneously, and then how to reconstruct them from their compressed versions. It is a theory that no data scientist will be able to ignore, in the same way that no scientist today can ignore how to solve linear systems of equations. In a nutshell, the theory considers underdetermined linear systems of equations of the form ${\bf{A}} \underline{x} = \underline{y}$ where the solution vector $\underline{x}$ is expected to be sparse. The dimensions of the problem are the size $N$ of $\underline{x}$, which is huge; the number $s$ of nonzero entries of $\underline{x}$, a.k.a. its sparsity, which is moderate; and the dimension $m$ of the vector $\underline{y}$, interpreted as the number of measurements/observations made on $\underline{x}$, which is to be kept as small as possible - certainly much smaller than $N$ and ideally close to $s$. The name of the game is to design sensing matrices $\bf{A}$ that allow one to reconstruct $s$-sparse vectors $\underline{x}$ from the mere knowledge of $\underline{y} = {\bf{A}} \underline{x}$, where $\underline{y}$ contains only on the order of $s \ln(N/s)$ measurements. To achieve this with high probability of success, it “suffices” to take the entries of $\bf{A}$ to be independent Gaussian random variables. This realization has already found many applications in areas such as signal processing, error correction, imaging, machine learning, bioinformatics, sensor networks, and wireless communications, some of which are elaborated on in the book coauthored by Prof. Simon Foucart (one of the investigators).
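The recovery problem described above can be sketched end to end with orthogonal matching pursuit (OMP), a standard greedy algorithm, applied to a Gaussian sensing matrix with a number of measurements a small constant times $s \ln(N/s)$. This is a generic illustration, not necessarily the matrices or algorithms pursued in this project:

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal Matching Pursuit: greedily recover an s-sparse x from
    y = A x by repeatedly picking the column of A most correlated with
    the current residual, then re-fitting on the chosen columns."""
    m, N = A.shape
    support = []
    residual = y.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # project y off support
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, s = 1000, 10
m = int(4 * s * np.log(N / s))             # a few times s*ln(N/s) measurements
A = rng.normal(size=(m, N)) / np.sqrt(m)   # i.i.d. Gaussian sensing matrix
x = np.zeros(N)
idx = rng.choice(N, size=s, replace=False)
x[idx] = rng.choice([-1.0, 1.0], size=s) * (1 + rng.random(s))
y = A @ x                                  # m << N compressed measurements

print(np.allclose(omp(A, y, s), x, atol=1e-6))
```

With only $m \approx 184$ measurements of a length-1000 vector, the 10 nonzero entries are recovered exactly in the noiseless setting, which is the phenomenon the theory quantifies.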

In parallel, coding theory has developed an impressive set of tools, including algebraic constructions of matrices, asymptotic analysis of sparse graph ensembles (matrices), and low-complexity message-passing algorithms, that have produced near-optimal solutions to the problem of communicating reliably over noisy channels. These techniques have witnessed phenomenal success, evidenced by their widespread use in cellular phones, Wi-Fi systems, hard disks, flash memories, and distributed storage systems. The overarching research theme is to leverage and strengthen the connections between compressive sensing and coding theory to design novel sensing matrices and recovery algorithms.


Coding for Non-Volatile Memories and Storage Systems

Sponsor - Looking for industry sponsors

Most storage systems are built from physical media and components that are unreliable. Yet, in modern computing, we have come to expect nearly perfect reliability from our storage systems. When we store information on a hard disk, a flash memory, or a distributed storage system such as Dropbox or Google Drive, we never expect to see errors when we read our data back. Error-correction coding is a critical component of all such reliable storage systems. As new paradigms for storage evolve (non-volatile memories are replacing hard disks in many applications, for example), coding techniques need to be tailored to the unique characteristics of these new media and systems. Our primary goal is the development of codes and decoders for a variety of modern storage systems that are efficient in terms of redundancy and/or latency. Specifically, our recent results have been in the areas of write-once memory (WOM) codes, half product codes for flash memories, and joint source-and-channel coding with polar codes.
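As a concrete example of a WOM code, consider the classic Rivest-Shamir construction (an illustration from the literature, not the codes developed in this work): it stores 2 bits, twice, in 3 write-once cells, where each cell can only ever flip from 0 to 1:

```python
# Rivest-Shamir WOM code: 2 bits written twice into 3 write-once cells.
# First-generation codewords have Hamming weight <= 1; the second write
# uses the bitwise complement of the first-generation codeword.
FIRST = {0b00: (0, 0, 0), 0b01: (0, 0, 1), 0b10: (0, 1, 0), 0b11: (1, 0, 0)}

def second_write(cells, value):
    """Rewrite the stored value without erasing: legal whenever the new
    value differs from the old one (if it is the same, leave the cells
    untouched), since only 0 -> 1 transitions are needed."""
    target = tuple(1 - b for b in FIRST[value])
    assert all(t >= c for t, c in zip(target, cells)), "would erase a cell"
    return target

def decode(cells):
    """Weight <= 1 means a first-generation codeword; otherwise the
    cells hold the complement of a first-generation codeword."""
    if sum(cells) <= 1:
        return next(v for v, c in FIRST.items() if c == cells)
    flipped = tuple(1 - b for b in cells)
    return next(v for v, c in FIRST.items() if c == flipped)

cells = FIRST[0b10]                  # first write: store the value 2
assert decode(cells) == 0b10
cells = second_write(cells, 0b01)    # rewrite: store 1 without erasing
assert decode(cells) == 0b01
```

The code trades redundancy (3 cells for 2 bits) for an extra write, exactly the kind of redundancy/lifetime trade-off that motivates tailoring codes to flash and other non-volatile media.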

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License