Scientific Program

Conference Series Ltd invites all participants across the globe to attend the 7th International Conference on Biostatistics and Bioinformatics in Chicago, Illinois, USA.

Day 1:

Keynote Forum

Petra Perner

IBaI
Biography:

Petra Perner (IAPR Fellow) is the director of the IBaI. She received her Diploma degree in electrical engineering and her PhD degree in computer science for work on “Data Reduction Methods for Industrial Robots with Direct Teach-in-Programming”. Her habilitation thesis was about “A Methodology for the Development of Knowledge-Based Image-Interpretation Systems”. She has been the principal investigator of various national and international research projects. She has received several research awards for her research work and three business awards for bringing intelligent image interpretation and data mining methods into business. Her research interests are image analysis and interpretation, machine learning, data mining, big data, image mining, and case-based reasoning.

Abstract:

Statistical methods play an important role in the description of image objects. Texture is one of the most important ways to describe the appearance of objects such as cells and tissues. While the standard statistical texture descriptor is based on the co-occurrence matrix, we propose a very flexible texture descriptor based on Random Sets. The texture descriptor can describe small objects as well as large objects in fine granularity. It also has an explanation capability that allows humans to understand the nature of the texture. Self-similarity is another important method to describe the appearance of cells as well as their motion or kinetics. This descriptor summarizes a set of features in one feature and gives a semantic description of what is going on. Both novel statistical descriptors are flexible enough to describe the different things going on with an object, and they are also very fast to calculate. They can form a standard tool for different descriptions of medical, biological, and other objects under consideration.
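As a concrete point of reference for the standard approach the abstract contrasts against, a grey-level co-occurrence matrix and one Haralick-style feature can be computed in a few lines. This is a generic sketch of the classical co-occurrence descriptor, not the Random Set descriptor proposed in the talk; the tiny image, level count, and pixel offset are arbitrary examples.

```python
import numpy as np

def cooccurrence_matrix(img, levels, offset=(0, 1)):
    """Grey-level co-occurrence matrix for one pixel offset,
    normalized to joint probabilities."""
    dr, dc = offset
    rows, cols = img.shape
    glcm = np.zeros((levels, levels))
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            glcm[img[r, c], img[r + dr, c + dc]] += 1
    return glcm / glcm.sum()

def contrast(glcm):
    """Haralick contrast: expected squared grey-level difference
    between neighboring pixels."""
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())

# Tiny 4-level example image (values are arbitrary).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = cooccurrence_matrix(img, levels=4)
```

Features such as contrast, energy, or homogeneity are then simple functionals of `P`; different offsets capture texture at different orientations and scales.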

Keynote Forum

Pierre Hansen

GERAD and HEC Montreal, Canada

Keynote: Algorithms for a minimum sum of squares clustering

Time : 10:05-10:40

Conference Series Biostatistics 2018 International Conference Keynote Speaker Pierre Hansen photo
Biography:

Pierre Hansen is a professor of Operations Research in the Department of Decision Sciences of HEC Montreal. His research is focused on combinatorial optimization, metaheuristics, and graph theory. With Nenad Mladenović, he developed the Variable Neighborhood Search metaheuristic, a general framework for building heuristics for a variety of combinatorial optimization and graph theory problems. He received the EURO Gold Medal in 1986, as well as other prizes. He is a member of the Royal Society of Canada and the author or co-author of close to 400 scientific papers. His first paper on VNS has been cited more than 3000 times.

Abstract:

Cluster analysis aims at solving the following very general problem: given a set of entities, find subsets (also called clusters) which are homogeneous and well separated. Homogeneity means that similar entities should belong to the same cluster; separation means that dissimilar entities should belong to different clusters. This concept can be expressed mathematically in many ways, and hence many heuristics and exact algorithms have been proposed. Clustering is a central problem in data mining, with applications to a large variety of fields. Using the sum of squared errors as a clustering criterion was already proposed by Steinhaus and co-workers in 1935. Perhaps the most well-known, most studied, and most often applied criterion in data mining is minimum sum of squares clustering. Progress in the design of heuristics and, more recently, of exact algorithms has been substantial. Early exact algorithms were of the branch-and-bound type (Edwards and Cavalli-Sforza 1965; Koontz, Narendra and Fukunaga 1975; Diehr 1985). A very important contribution is the K-means heuristic, due to Lloyd (first developed in 1957 but published only in 1982) and independently to Forgy (1965) and MacQueen (1967). It works as follows: (1) choose an initial set of entities as cluster centers; (2) assign each entity to the closest center; (3) update the current centers to the centroids of the clusters; (4) return to step (2) as long as there is a modification in the centers. Despite its very substantial success (over 8525 citations of Lloyd's paper, according to Google Scholar), there are some difficulties in its application: (i) the number of clusters is not known a priori; (ii) the results depend strongly on the initial choice of cluster centers; (iii) some clusters may disappear during the process. Many proposals have been made to alleviate these difficulties.
Recent progress includes the use of metaheuristics: Variable Neighborhood Search (Hansen and Mladenović 1997, 2001), Tabu Search (Glover 1989), and GRASP (Feo and Resende 1989, 1995). Exact methods for the minimum sum of squares problem have also been developed. They include column generation (du Merle et al. 1999), cutting planes (Peng and Xia 2005), dynamic programming (Van Os and Meulman 2004, Jensen 1969), DC programming (Tao 2007), concave minimization (Bagirov 2008), and the Reformulation-Linearization Technique (Sherali and Desai 2005). Merging branch and bound with an adaptation of a location problem, i.e., Weber's problem with maximum distance, has led to substantial progress in exact resolution: the size of the largest instance solved exactly was raised from 220 to 2392 entities.
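The K-means steps (1)-(4) described in the abstract can be sketched directly in code. This is a generic illustration of Lloyd's heuristic, not any of the exact algorithms cited; the two-blob data set is an arbitrary example.

```python
import numpy as np

def k_means(X, k, seed=0):
    """Lloyd's K-means heuristic, following steps (1)-(4)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # (1) initial centers
    while True:
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                        # (2) assign to closest
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)                                # (3) recompute centroids
        ])
        if np.allclose(new_centers, centers):                # (4) stop when unchanged
            return labels, centers
        centers = new_centers

# Two well-separated synthetic blobs.
X = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
labels, centers = k_means(X, k=2)
```

Note how difficulty (ii) shows up here: the result of step (1) depends on the random seed, which is exactly why restarts and metaheuristics such as VNS are used in practice.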

Keynote Forum

John E Kolassa

Rutgers University
Biography:

John Kolassa completed his PhD at the University of Chicago. He worked as a postdoctoral fellow at the IBM TJ Watson Research Center and at the University of Rochester. He was an assistant professor at the University of Rochester and is presently a professor at Rutgers. His research is supported by the National Science Foundation and was previously supported by the National Institutes of Health. He is an editor of Stat and an associate editor of the Journal of the American Statistical Association. He is a fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the International Statistical Institute.

Abstract:

Proportional hazards regression models are very commonly used to model time to events in the presence of censoring. In some cases, particularly when sample sizes are moderate and covariates are discrete, maximum partial likelihood estimates are infinite. This lack of finite estimators complicates the use of profile methods for estimating and testing the remaining parameters. This presentation provides a method for inference in such cases. The method builds on similar techniques in use in logistic and multinomial regression and avoids arbitrary regularization.
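The infinite-estimate phenomenon is easiest to see in the closely related logistic model, whose techniques the abstract says the method builds on. With a perfectly separated binary covariate (a hypothetical toy data set, not from the talk), the log-likelihood increases monotonically in the coefficient, so no finite maximizer exists; the monotone partial likelihood in Cox regression behaves analogously.

```python
import numpy as np

# Hypothetical, perfectly separated data: every event (y = 1) has
# covariate x = 1 and every non-event (y = 0) has x = 0.
x = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

def loglik(beta):
    """Logistic log-likelihood (no intercept); under complete
    separation it increases monotonically in beta."""
    eta = beta * x
    return float((y * eta - np.log1p(np.exp(eta))).sum())

# The likelihood keeps improving as beta grows without bound.
vals = [loglik(b) for b in (1.0, 5.0, 20.0)]
```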

Keynote Forum

Bing Li

Pennsylvania State University, USA

Keynote: Copula Gaussian graphical models for functional data

Time : 11:35-12:05

Conference Series Biostatistics 2018 International Conference Keynote Speaker Bing Li photo
Biography:

Bing Li completed his PhD at the age of 32 at the Department of Statistics, University of Chicago. He is a Professor of Statistics at the Department of Statistics of the Pennsylvania State University. He is an IMS Fellow and an ASA Fellow. He serves as an Associate Editor for the Annals of Statistics and the Journal of the American Statistical Association.

Abstract:

We consider the problem of constructing statistical graphical models for functional data; that is, the observations on the vertices are random functions. These types of data are common in medical applications such as EEG and fMRI. Recently published functional graphical models rely on the assumption that the random functions are Hilbert-space-valued Gaussian random elements. We relax this assumption by introducing copula Gaussian random elements on Hilbert spaces, leading to what we call the Functional Copula Gaussian Graphical Model (FCGGM). This model removes the marginal Gaussian assumption but retains the simplicity of the Gaussian dependence structure, which is particularly attractive for large data. We develop four estimators, together with their implementation algorithms, for the FCGGM. We establish the consistency and the convergence rates of one of the estimators under different sets of sufficient conditions of varying strength. We compare our FCGGM with the existing functional Gaussian graphical model by simulation, under both non-Gaussian and Gaussian graphical models, and apply our method to an EEG data set to construct brain networks.
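One standard rank-based ingredient of copula Gaussian (nonparanormal) modeling, shown here for scalar rather than functional data, is estimating the latent Gaussian correlation through Kendall's tau, which is invariant to monotone marginal transformations. The sketch below is a generic illustration of that idea, not one of the four FCGGM estimators of the talk; the simulated data and sample size are arbitrary.

```python
import numpy as np

def kendall_tau(a, b):
    """Kendall's tau from pairwise sign agreement (assumes no ties)."""
    sa = np.sign(a[:, None] - a[None, :])
    sb = np.sign(b[:, None] - b[None, :])
    n = len(a)
    return (sa * sb).sum() / (n * (n - 1))

def latent_correlation(X):
    """Rank-based estimate sin(pi/2 * tau) of the latent Gaussian
    correlation matrix; monotone marginal distortions leave it unchanged."""
    p = X.shape[1]
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            R[i, j] = R[j, i] = np.sin(np.pi / 2 * kendall_tau(X[:, i], X[:, j]))
    return R

rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=600)
X = np.column_stack([Z[:, 0], np.exp(Z[:, 1])])  # exp: a monotone marginal distortion
R = latent_correlation(X)
```

Because only the ranks enter, `latent_correlation(X)` equals `latent_correlation(Z)` exactly: the marginal Gaussian assumption is removed while the Gaussian dependence structure is retained.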

Keynote Forum

Ergi Sener

IdeaField BV, Holland

Keynote: Disrupting retail analysis with Artificial Intelligence-powered advanced analytics

Time : 12:05-12:35

Biography:

Ergi Sener, named one of the 20 Turkish people to follow in the field of technology, received a BS in Microelectronics Engineering in 2005 and a double MS in Telecommunications & Management in 2007 from Sabanci University. He is pursuing a PhD in Technology Management & Innovation. He began his career as the co-founder and business development director of New Tone Technology Solutions in 2007, in partnership with Sabanci University's Venture Program. Between 2009 and 2013, he worked as a CRM specialist at Garanti Payment Systems. In 2013, he joined MasterCard as a business development and innovation manager. He was also one of the co-founders and the managing director of Metamorfoz ICT, a new-generation fintech company, and Bonbon Tech, a leading IoT-focused new-generation analytics company. He is currently the Executive Board Member & CDO of the Dutch incubation center IdeaField BV. During his career, his awards include the “Global Telecoms Business Innovation Award” in 2014, the “MasterCard Europe President's Award for Innovation” in 2013, the “Payment System of the Year Award” from Payment Systems Magazine in 2012, and the “Best Mobile Transaction Solution Award” from SIMagine in 2011.

Abstract:

In recent years, the increasing importance of “big data” has also led to “big” expectations. Particularly with the introduction of the Internet of Things (IoT), in which every object is linked to the internet, and with the continuous growth of mobile and digital applications and services, data is being gathered at a surprising rate from various sources. When used and evaluated correctly, data has become a crucial competitive weapon, so in the technology world data is frequently described as the “new gold” or “new oil”. However, data does not represent value by itself; value is created by processing data to solve a specific problem or fulfill a need. BonAir makes sense of big data by analyzing data collected on customer visits, behaviors, and profiles, uncovering the potential of big data and providing competitive advantages for clients. With its unique technology, BonAir aims to perform real-time, behavior-based analysis. Based on their needs, customers can also be directed to the right location at the right time through an optional app integration. The BonAir platform is being improved with more advanced technology and better customer use cases. At the heart of the new platform lies new hardware, an all-in-one device that contains Wi-Fi sensors, a beacon, and several other sensors (such as heat, motion, and pressure) as well as camera integration. The camera will be used to count visitors with the best accuracy. Wi-Fi sensors will provide all BonAir v1.0 capabilities, including real-time heat maps, trend analysis, duration information, visit history, frequency, and branch comparisons. Beacons will be used to send personalized notifications on the iOS platform and for indoor navigation use cases. Last, but not least, the other sensors will be used to understand the effect of several factors and to create predictive analytics.
By combining insights from each of these technologies, BonAir+ will be a major tool for management decisions and business analytics. The BonAir solution is currently the widest Wi-Fi-based analytics network in several countries, with more than 5,000 sensors deployed in the field. Clients include Benetton, BMW, Volvo, Mercedes, Turkcell, and Turk Telekom.

Keynote Forum

Nicolas Wesner

Mazars Actuariat, France

Keynote: Risk visualization: When actuarial science meets visual analytics

Time : 12:35-13:05

Biography:

Nicolas Wesner completed his PhD in Economics at the University of Paris X Nanterre and has been an Associate Actuary since 2011. He is the head of the Pension Department of Mazars Actuariat, an actuarial consulting firm that provides financial services for banks, insurance companies, and pension funds. He has published chapters and papers in reputed journals on various subjects such as econometrics, quantitative finance, insurance and pensions, and data mining.

Abstract:

Actuarial science aims at measuring and modeling financial risk with statistical and mathematical models. Unlike traditional sciences such as physics or biology, its main raw material, risk, is not directly observable and is inherently subjective. It is perhaps for some of these reasons that the concept of risk visualization has not received much attention in the literature. This paper argues that risk visualization can find many practical applications in finance and insurance, namely for communicating about risk (internally, toward senior management, or to third parties), risk identification and management (through comprehensive, real-time, interactive risk dashboards), and informed decision making (multi-objective optimization). After reviewing the foundations of visual analytics and its appeal for risk analysis and risk management, applications of dimensionality reduction and clustering techniques to practical problems of risk monitoring in finance and insurance are presented.
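As a minimal illustration of the dimensionality-reduction side of such a dashboard, principal components via SVD can project a high-dimensional panel onto two coordinates for plotting. The "risk factor" panel below is synthetic and hypothetical, not from the talk; it is built so that one common shock drives all factors, which the first component then captures.

```python
import numpy as np

def pca_project(X, k=2):
    """Center the data and project onto the top-k principal axes via SVD.
    Returns the k-dimensional scores and the per-component variances."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, s ** 2 / (len(X) - 1)

# Hypothetical panel of 8 risk factors driven by one common shock.
rng = np.random.default_rng(1)
common = rng.normal(size=(200, 1))
X = common @ np.ones((1, 8)) + 0.1 * rng.normal(size=(200, 8))
scores, variances = pca_project(X)
```

The 2-D `scores` are what a risk dashboard would scatter-plot; a clustering step on the same scores would then group similar risk profiles.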

Keynote Forum

Deming Chen

University of Illinois at Urbana-Champaign, USA

Keynote: Design productivity, compilation, and acceleration for data analytic applications

Time : 14:00-14:35

Biography:

Deming Chen obtained his BS in computer science from the University of Pittsburgh in 1995, and his MS and PhD in computer science from the University of California in 2001 and 2005, respectively. He joined the University of Illinois at Urbana-Champaign (UIUC) in 2005 and became a professor there in 2015. He is a technical committee member for a series of top conferences and symposia on EDA, FPGA, low-power design, and VLSI systems design. He is an associate editor for several leading IEEE and ACM journals. He received the NSF CAREER Award in 2008, the ACM SIGDA Outstanding New Faculty Award in 2010, and IBM Faculty Awards in 2014 and 2015. He has also received seven Best Paper Awards and the First Place Winner Award of the DAC International Hardware Contest on IoT in 2017. He is included in the List of Teachers Ranked as Excellent in 2008 and 2017. He was previously involved in two startup companies, both of which were acquired. In 2016, he co-founded a new startup, Inspirit IoT, Inc., for design and synthesis for machine learning applications targeting the IoT industry. He is the Donald Biggar Willett Faculty Scholar of the College of Engineering of UIUC.

Abstract:

Deep Neural Networks (DNNs) are computation intensive. Without efficient hardware implementations of DNNs, many promising AI applications will not be practically realizable. In this talk, we will analyze several challenges facing the AI community in mapping DNNs to hardware accelerators. In particular, we will evaluate the FPGA's potential role in accelerating DNNs for both the cloud and edge devices. Although FPGAs can provide desirable customized hardware solutions, they are difficult to program and optimize. We will present a series of effective design techniques for implementing DNNs on FPGAs with high performance and energy efficiency. These include automated hardware/software co-design, the use of configurable DNN IPs, resource allocation across DNN layers, smart pipeline scheduling, Winograd and FFT techniques, and DNN reduction and re-training. We showcase several design solutions, including a Long-term Recurrent Convolutional Network (LRCN) for video captioning, an Inception module (GoogLeNet) for face recognition, and a Long Short-Term Memory (LSTM) network for sound recognition. We will also present some of our recent work on developing new DNN models and data structures to achieve higher accuracy for several interesting applications such as crowd counting, genomics, and music synthesis.
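The FFT technique mentioned above rests on the convolution theorem: convolution computed through the frequency domain matches the direct multiply-accumulate computation but costs O(N log N) instead of O(n·m). A minimal 1-D sketch of that equivalence (illustrative only, not the talk's FPGA implementation; the signals are arbitrary examples):

```python
import numpy as np

def conv1d_direct(x, h):
    """Direct (full) linear convolution: O(n*m) multiply-accumulates."""
    n, m = len(x), len(h)
    y = np.zeros(n + m - 1)
    for i in range(n):
        for j in range(m):
            y[i + j] += x[i] * h[j]
    return y

def conv1d_fft(x, h):
    """The same convolution through the frequency domain in O(N log N):
    zero-pad both signals, multiply their FFTs, transform back."""
    N = len(x) + len(h) - 1
    return np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, -1.0])
```

For the large 2-D filter banks inside convolution layers, the same identity (and the related Winograd transform) is what makes frequency-domain accelerators worthwhile.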