7 results for new methods
in Digital Commons at Florida International University
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency gives rise to an enormous number of possible system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified against the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method for mining Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable because it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the trade-off between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
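The abstract notes that McPatom considers only a pair of threads and accesses to a single shared variable at a time. As a rough illustration of the kind of pattern such a tool targets (this is a minimal sketch, not McPatom itself), the code below scans an execution trace on one shared variable for the classic unserializable interleavings described in the atomicity-violation literature; the trace format and function names are hypothetical.

```python
# The four classic unserializable triples (local access, remote access,
# next local access) on the same shared variable; drawn from the general
# atomicity-violation literature, not necessarily McPatom's formulation.
UNSERIALIZABLE = {
    ("R", "W", "R"),  # remote write splits two local reads
    ("W", "W", "R"),  # local read misses the preceding local write
    ("R", "W", "W"),  # local write is based on a stale local read
    ("W", "R", "W"),  # remote read observes an intermediate local state
}

def find_atomicity_violations(trace):
    """trace: list of (thread_id, "R" or "W") events on one shared variable."""
    violations = []
    for i, (tid, prev_op) in enumerate(trace):
        remote = []  # accesses by other threads between this thread's access pair
        for j in range(i + 1, len(trace)):
            other_tid, op = trace[j]
            if other_tid == tid:
                # (prev_op, op) is the local pair; test each interleaved remote access
                for r_tid, r_op in remote:
                    if (prev_op, r_op, op) in UNSERIALIZABLE:
                        violations.append((tid, prev_op, r_tid, r_op, op))
                break
            remote.append((other_tid, op))
    return violations

# Toy usage: a lost update -- thread 1 reads, thread 2 writes, thread 1 writes.
trace = [(1, "R"), (2, "W"), (1, "W")]
print(find_atomicity_violations(trace))  # [(1, 'R', 2, 'W', 'W')]
```

A predictive tool like the one described would explore alternative feasible interleavings of such access pairs via model checking rather than only inspecting the observed order, which is what motivates the precision/coverage trade-off discussed above.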
Abstract:
The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in individual users' subjective perceptions of multimedia content. The objective of this dissertation is to develop an integrated multimedia indexing and retrieval framework that bridges the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques has been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated in a systematic way to enable semantic search for images and videos, and can be tailored to solve problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) for general content-based image retrieval, a stochastic mechanism enables long-term learning of high-level concepts from a set of training data, such as user access frequencies and access patterns for images; (2) for object-based image retrieval, beyond whole-image retrieval, a novel multiple instance learning framework allows a user to search more effectively for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract object information from images. This segmentation algorithm is further used in video indexing and retrieval, where a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is proposed to detect events in soccer videos, fully utilizing the multi-modality features and object information obtained through video shot/scene detection. Another contribution of this dissertation is the potential for the above techniques to be tailored and applied to other multimedia applications. This is demonstrated by their use in traffic video surveillance. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.
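The abstract mentions shot/scene segmentation based on low-level visual feature comparison. As a simple illustration of that general idea (a sketch under assumed parameters, not the dissertation's algorithm), the code below marks a shot boundary wherever the color histograms of consecutive frames differ by more than a threshold; the bin count, distance measure, and threshold are illustrative assumptions.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """Per-channel color histogram of one frame, normalized to sum to 1."""
    parts = []
    for channel in range(frame.shape[-1]):
        counts, _ = np.histogram(frame[..., channel], bins=bins, range=(0, 255))
        parts.append(counts)
    hist = np.concatenate(parts).astype(float)
    return hist / hist.sum()

def detect_shot_boundaries(frames, threshold=0.4):
    """Report frame indices where consecutive histograms differ strongly.

    `threshold` is an illustrative cutoff on the L1 histogram distance.
    """
    boundaries = []
    prev = color_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = color_histogram(frame)
        if np.abs(cur - prev).sum() > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries

# Toy usage: two synthetic "scenes" of random frames with different brightness.
rng = np.random.default_rng(0)
scene_a = rng.integers(0, 100, size=(10, 48, 64, 3))
scene_b = rng.integers(150, 255, size=(10, 48, 64, 3))
print(detect_shot_boundaries(list(scene_a) + list(scene_b)))  # expected: [10]
```

A real shot/scene detector of the kind described would combine such frame-level comparisons with object tracking and audio cues, which helps distinguish gradual transitions from abrupt cuts.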
Abstract:
Beginning in the era of the Spanish conquest and taking the reader right up to the present day, this book focuses on how the landscape of Cuba has changed and evolved into the environment we see today. It illustrates the range of factors – economic, political and cultural – that have determined Cuba’s physical geography, and explores the shifting conservation measures which have been instituted in response to new methods in agriculture and land management. The text uses historical documents, fieldwork, Geographic Information System (GIS) data and remotely-sensed satellite imagery to detail Cuba’s extensive land-use history as well as its potential future. The author goes further to analyze the manner, speed and methods of landscape change, and examines the historical context and governing agendas that have had an impact on the relationship between Cuba’s inhabitants and their island. Gebelein also assesses the key role played by agricultural production in the framework of international trade required to sustain Cuba’s people and its economy. The book concludes with a review of current efforts by Cuban and other research scientists, as well as private investors, conservation managers and university professors who are involved in shaping Cuba’s evolving landscape and managing it during the country’s possible transition to a more politically diverse, enfranchised and open polity.
Abstract:
The primary goal of this dissertation is the study of patterns of viral evolution inferred from serially sampled sequence data, i.e., sequence data obtained from strains isolated at consecutive time points from a single patient or host. RNA viral populations have extremely high genetic variability, largely due to their astronomical population sizes within host systems, high replication rates, and short generation times. It is this aspect of their evolution that demands special attention and a different approach when studying the evolutionary relationships of serially sampled sequence data. New methods that analyze serially sampled data were developed shortly after a groundbreaking HIV-1 study of several patients from whom viruses were isolated at recurring intervals over a period of 10 or more years. These methods assume a tree-like evolutionary model, yet many RNA viruses can exchange genetic material with one another through a process called recombination. A genealogy involving recombination is best described by a network structure. A more general approach was implemented in a new computational tool, Sliding MinPD, which takes the sampling times of the input sequences into account and reconstructs the viral evolutionary relationships as a network structure with implicit representations of recombination events. The underlying network organization reveals unique patterns of viral evolution and could help explain the emergence of disease-associated mutants and drug-resistant strains, with implications for patient prognosis and treatment strategies. In order to comprehensively test the developed methods and to carry out comparison studies with other methods, synthetic data sets are critical. Therefore, appropriate sequence generators were also developed to simulate the evolution of serially sampled recombinant viruses, new and more thorough evaluation criteria for recombination detection methods were established, and three major comparison studies were performed. The newly developed tools were also applied to real HIV-1 sequence data, and it was shown that the results, represented within an evolutionary network structure, can be interpreted in biologically meaningful ways.
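The abstract describes reconstructing evolutionary relationships among serially sampled sequences while respecting their sampling times. The sketch below illustrates only the simplest ingredient of such an approach, under stated simplifications: each sequence is linked to its closest relative from an earlier sampling time using Hamming distance, and recombination (which Sliding MinPD also handles by examining windows along the alignment) is ignored. The data format and function names are hypothetical.

```python
def hamming(a, b):
    """Proportion of mismatching sites between two aligned sequences of equal length."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def link_to_closest_ancestor(samples):
    """samples: list of (time, name, sequence) tuples.

    Returns (descendant, closest earlier relative, distance) edges, giving a
    crude ancestor-descendant network without recombination events.
    """
    edges = []
    for t, name, seq in sorted(samples, key=lambda s: s[0]):
        earlier = [s for s in samples if s[0] < t]
        if not earlier:
            continue  # sequences from the first time point have no sampled ancestor
        best = min(earlier, key=lambda s: hamming(seq, s[2]))
        edges.append((name, best[1], hamming(seq, best[2])))
    return edges

# Toy usage with short aligned sequences from three sampling times.
samples = [
    (1, "t1_a", "ACGTACGTAC"),
    (1, "t1_b", "ACGTTCGTAC"),
    (2, "t2_a", "ACGAACGTAC"),
    (3, "t3_a", "ACGAACGTTC"),
]
for child, parent, dist in link_to_closest_ancestor(samples):
    print(f"{child} <- {parent}  (distance {dist:.2f})")
```

Detecting recombination on top of such links would require comparing distances within sliding windows of the alignment rather than over whole sequences, so that different regions of one sequence can point to different earlier relatives.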
Abstract:
What is the architecture of transience? What role does architecture play in the impermanent context of the nomad? What form does architecture take when our perception of shelter transforms from fixed and static to flexible and transportable? How does architecture react to the challenges of mobility and change? Traditional building forms speak of stability as an important aspect of architecture. Does portability imply a different building form? During the 1950s, Buckminster Fuller introduced the idea of mobile, portable structures. In the 1960s, Archigram's examples of architectural nomadism made the mobile home an accepted feature of our contemporary landscape. Currently, new materials and new methods of assembly and transportation open opportunities for rethinking portable architecture. For this thesis, a shelter was developed that provides inhabitable space and portability. The shelter was designed to be easily carried as a backpack. With minimal human effort, the structure can be assembled and erected in a few minutes. Although this portable shelter needs to be maneuvered, folded, and tucked away for transportation, it meets the demands of nomadic behavior, which emphasizes comfort and portability.
Abstract:
The general method for determining organomercurials in environmental and biological samples is gas chromatography with electron capture detection (GC-ECD). However, tedious sample workup protocols and poor chromatographic response show the need for new methods. Here, atomic fluorescence-based methods that are free from these deficiencies are described. The organomercurials in soil, sediment, and tissue samples are first released from the matrices with acidic KBr and cupric ions and extracted into dichloromethane. The initial extracts are subjected to a thiosulfate cleanup, and the organomercury species are isolated as their chloride derivatives with cupric chloride and subsequent extraction into a small volume of dichloromethane. In water samples, the organomercurials are pre-concentrated on a sulfhydryl cotton fiber adsorbent, followed by elution with acidic KBr and CuSO4 and extraction into dichloromethane. Analysis of the organomercurials is accomplished by capillary column chromatography with atomic fluorescence detection.