16 results for asynchronous circuits and systems
in Digital Commons at Florida International University
Abstract:
The local area network (LAN) interconnecting computer systems and software can make a significant contribution to the hospitality industry. The author discusses the advantages and disadvantages of such systems.
Abstract:
Increased device density, higher switching speeds of integrated circuits, and shrinking package sizes are placing new demands on high-power thermal management. The conventional method of forced-air cooling with a passive heat sink can handle heat fluxes of up to 3-5 W/cm²; however, current microprocessors operate at levels of 100 W/cm², which demands novel thermal-management systems. In this work, water-cooling systems with active heat sinks are embedded in the substrate. The research involved fabricating LTCC substrates in three configurations: an open-duct substrate, a second with thermal vias, and a third with thermal vias, free-standing metal columns, and metal foil. Thermal testing was performed experimentally, and the results were compared with CFD simulations. The overall thermal resistance of the base substrate is demonstrated to be 3.4 °C/(W/cm²). Adding thermal vias reduces the effective resistance of the system by a factor of 7, and further adding free-standing columns reduces it by a factor of 20.
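Read as a flux-based resistance, these figures give the expected temperature rise directly as ΔT = R·q. The short Python sketch below is an illustrative calculation only; the unit reading of 3.4 °C/(W/cm²) and the 100 W/cm² load are taken from the abstract, and nothing else is from the thesis.

```python
# Temperature rise from a flux-based thermal resistance:
# delta_T = R * q, with R in °C/(W/cm²) and q in W/cm².
# Unit interpretation of the reported 3.4 °C/(W/cm²) is an assumption.
R_BASE = 3.4               # base LTCC substrate (reported)
R_VIAS = R_BASE / 7        # with thermal vias (reported ~7x reduction)
R_COLS = R_BASE / 20       # with vias + free-standing columns (~20x)

q = 100.0                  # microprocessor-class heat flux, W/cm²

for label, r in [("base", R_BASE), ("vias", R_VIAS), ("columns", R_COLS)]:
    print(f"{label:8s} R = {r:5.3f} °C/(W/cm²) -> dT ~ {r * q:6.1f} °C")
```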
Abstract:
This dissertation reports the results of a study that examined differences between genders in a sample of adolescents from a residential substance abuse treatment facility. The sample included 72 males and 65 females, ages 12 through 17. The data were archival, having been originally collected for a study of elopement from treatment. The current study included 23 variables drawn from multiple dimensions, including socioeconomic, legal, school, family, substance abuse, psychological, social support, and treatment histories. Collectively, they provided information about problem behaviors and psychosocial problems that are correlates of adolescent substance abuse. The study hypothesized that these problem behaviors and psychosocial problems exist in different patterns and combinations between genders, and that these patterns and combinations would constitute profiles important for treatment. K-means cluster analysis identified differential profiles between genders in all three areas: problem behaviors, psychosocial problems, and treatment profiles. In the dimension of problem behaviors, the predominantly female group was characterized as suicidal and destructive, while the predominantly male group was identified as aggressive and low achieving. In the dimension of psychosocial problems, the predominantly female group was characterized as abused depressives, while the male group was identified as asocial with low problem severity. A third group, neither predominantly female nor male, was characterized as social with high problem severity. When these dimensions were combined to form treatment profiles, the predominantly female group was characterized as abused, self-harmful, and social, and the male group was identified as aggressive, destructive, low achieving, and asocial. Finally, logistic regression and discriminant analysis were used to determine whether a history of sexual and physical abuse affected problem behavior differentially between genders. Sexual abuse had a substantially greater influence in producing self-mutilating and suicidal behavior among females than among males. Additionally, a model including sexual abuse, physical abuse, low family support, and low support from friends showed a moderate capacity to predict unusual harmful behavior (fire-starting and cruelty to animals) among males. Implications for social work practice, social work research, and systems science are discussed.
Abstract:
Small errors can prove catastrophic. We remark that a very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and we then say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena; a small error in the former produces an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that a pair of test conditions defines a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device under test; the actual test determines how well the device responds to those constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it places those inputs as close as possible to the actual switching critical points and guarantees that the device will meet its input-output specifications. Prediction becomes impossible by the classical analytical methods of Newton and Euclid. We have found that nonlinear dynamic behavior is the natural state of all circuits and devices, and that opportunities exist for effective error detection in a nonlinear dynamics and chaos environment. Today, a set of linear limits is established around every aspect of a digital or analog circuit, outside of which devices are considered bad after failing the test. Deterministic chaos in circuits is a fact, not a possibility, as this Ph.D. research has shown. In practice, for standard linear informational methodologies, this chaotic output is usually undesirable, and we are trained to seek a more regular stream of output data. This Ph.D. research explored the possibility of taking the foundation of a well-known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error-detector instrument able to bring together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical system-status determination.
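The sensitivity claim can be made concrete with the logistic map, a standard one-line chaotic system (an illustrative sketch, not part of the dissertation): two initial conditions differing by one part in a billion diverge to order one within a few dozen iterations.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), in its chaotic regime (r = 4).
r = 4.0
x, y = 0.3, 0.3 + 1e-9   # two initial conditions differing by 1 part in 10^9

for n in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.3e}")
# By roughly step 30-40 the separation is of order one: a small error
# in the initial conditions has produced an enormous error later on.
```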
Abstract:
Small devices, in the range of nanometers, are playing a major role in today's technology. The field of nanotechnology is concerned with materials and systems whose structures and components exhibit novel and significantly improved physical, chemical, and biological properties, phenomena, and processes due to their nanoscale size. Researchers are increasingly finding that structural features in the range of about 1 to 100 nanometers behave quite differently from isolated molecules (about 1 nanometer) or bulk materials. For comparison, a 10 nanometer structure is 1000 times smaller than the diameter of a human hair. The virtues of working in the nanodomain are increasingly recognized by the scientific community and discussed in the popular press, and the use of such devices is expected to revolutionize our industries and lives. This work focuses on the fabrication, characterization, and discovery of new nanostructured thin films. The research comprised the design of a new high-deposition-rate nanoparticle machine for depositing nanostructured films from beams of nanoparticles and the investigation of the films' unique optical and physical properties. The machine was designed, built, and successfully tested. Nanostructured thin films were deposited from copper, gold, iron, and zirconium targets under different conditions, with grain sizes between 1 and 20 nm. Transmission electron microscopy (TEM), atomic force microscopy (AFM), and X-ray diffraction (XRD) confirmed the nanoscale grain structure of the deposited films. The optical properties of the films deposited from the copper, iron, and zirconium targets differed significantly from those of bulk material and conventional thin films: the Zr, Cu, and Fe films were transparent. Gold films revealed an epitaxial contact with the silicon substrate, with interesting crystal structures. The new high-deposition-rate nanoparticle machine was thus able to deposit new nanostructured films with properties different from the bulk and thin films reported in the literature.
Abstract:
This research aimed at developing a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. The study views an enterprise as a system that creates value for its customers; accordingly, developing the framework made use of systems theory and IDEF methodologies. The study defined ESE as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise over its life cycle. The proposed ESE classification scheme breaks an enterprise system down into four elements: work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. The framework is intended for identifying research voids in the ESE discipline; it also helps apply engineering and systems tools to this emerging field, captures the relationships among various enterprise aspects, and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic: it consists of a hierarchy of engineering activities presented in an IDEF0 model, with each activity defined by its inputs, outputs, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design for its physical, managerial, and/or informational layers, and the process is applicable to a new enterprise system design or to an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
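Structurally, the classification scheme is a three-way cross product: elements × facets × process phases. A minimal data-structure sketch (illustrative only; the names are taken from the abstract, everything else is an assumption) enumerates the resulting engineering cells:

```python
from itertools import product

ELEMENTS = ["work", "resources", "decision", "information"]
FACETS = ["strategy", "competency", "capacity", "structure"]
PHASES = ["specification", "analysis", "design", "implementation"]

# Every element-facet combination passes through each engineering phase,
# giving 4 x 4 x 4 = 64 cells in the ESE classification scheme.
cells = list(product(ELEMENTS, FACETS, PHASES))
print(len(cells))    # 64
print(cells[0])      # ('work', 'strategy', 'specification')
```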
Abstract:
To carry out their specific roles in the cell, genes and gene products often work together in groups, forming many relationships among themselves and with other molecules. Such relationships include physical protein-protein interactions, regulatory relationships, metabolic relationships, genetic relationships, and more. With advances in science and technology, high-throughput methods have been developed to simultaneously detect tens of thousands of pairwise protein-protein and protein-DNA interactions. However, the data generated by high-throughput methods are prone to noise; furthermore, the technology has its limitations and cannot detect all kinds of relationships between genes and their products. There is thus a pressing need to investigate these relationships and their roles in a living system using bioinformatic approaches, and doing so is a central challenge in computational biology and systems biology. This dissertation focuses on exploring relationships between genes and gene products using bioinformatic approaches; specifically, we consider problems related to regulatory relationships, protein-protein interactions, and semantic relationships between genes. A regulatory element is an important pattern or "signal", often located in the promoter of a gene, that is used in the process of turning a gene "on" or "off". Predicting regulatory elements is a key step in exploring the regulatory relationships between genes and gene products, and in this dissertation we consider the problem of improving such prediction using comparative genomics data. With regard to protein-protein interactions, we have developed bioinformatic techniques to estimate support for the interaction data. While protein-protein interactions and regulatory relationships can be detected by high-throughput biological techniques, semantic relationships cannot be detected by any single technique but can be inferred from multiple sources of biological data. The contributions of this thesis are the development and application of a set of bioinformatic approaches that address the challenges above: (i) an EM-based algorithm that improves the prediction of regulatory elements using comparative genomics data, (ii) an approach for estimating the support of protein-protein interaction data, with application to functional annotation of genes, (iii) a novel method for inferring functional networks of genes, and (iv) techniques for clustering genes using multi-source data.
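For flavor, regulatory-element (motif) discovery is classically formulated as EM over unknown motif positions. The sketch below is a minimal MEME-style EM with one occurrence per sequence and a uniform background, illustrating the class of algorithm rather than the dissertation's comparative-genomics method; the toy sequences are hypothetical.

```python
import numpy as np

BASES = "ACGT"

def em_motif(seqs, w, n_iter=50, seed=0):
    """MEME-style EM for one length-w motif occurrence per sequence:
    E-step weights every candidate offset, M-step re-estimates the PWM."""
    rng = np.random.default_rng(seed)
    X = [np.array([BASES.index(c) for c in s]) for s in seqs]
    pwm = rng.dirichlet(np.ones(4), size=w)   # w x 4 motif model
    bg = np.full(4, 0.25)                     # uniform background (assumption)
    for _ in range(n_iter):
        counts = np.full((w, 4), 0.1)         # pseudocounts
        for x in X:
            offsets = range(len(x) - w + 1)
            # E-step: motif-vs-background likelihood ratio per offset
            lr = np.array([np.prod(pwm[np.arange(w), x[i:i + w]]
                                   / bg[x[i:i + w]]) for i in offsets])
            z = lr / lr.sum()                 # posterior over start sites
            # M-step contribution: expected base counts under z
            for i, zi in zip(offsets, z):
                counts[np.arange(w), x[i:i + w]] += zi
        pwm = counts / counts.sum(axis=1, keepdims=True)
    return pwm

# Toy usage: a planted "TATAAT"-like element in short sequences.
seqs = ["GCGTATAATGC", "TTTATAATCGA", "ATATAATGGCC"]
print(np.round(em_motif(seqs, w=6), 2))
```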
Abstract:
With the explosive growth of the volume and complexity of document data (e.g., news, blogs, web pages), it has become a necessity to semantically understand documents and deliver meaningful information to users. The areas dealing with these problems span data mining, information retrieval, and machine learning. Document clustering and summarization, for example, are two fundamental techniques for understanding document data and have attracted much attention in recent years. Given a collection of documents, document clustering aims to partition them into groups to provide efficient document browsing and navigation. One underexplored problem in document clustering is how to generate a meaningful interpretation for each cluster produced by the clustering process. Document summarization is another effective technique for document understanding: it generates a summary by selecting sentences that carry the major or topic-relevant information in the original documents. How to improve automatic summarization performance and how to apply it to newly emerging problems are two valuable research directions. To help people capture the semantics of documents effectively and efficiently, this dissertation focuses on developing data mining and machine learning algorithms and systems for (1) integrating document clustering and summarization to obtain meaningful document clusters with summarized interpretations, (2) improving document summarization performance and building document understanding systems to solve real-world applications, and (3) summarizing the differences and evolution of multiple document sources.
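As a baseline illustration of clustering with a crude interpretable label per cluster (a minimal scikit-learn sketch under standard defaults, not the dissertation's integrated clustering-summarization model; the toy corpus is hypothetical), the highest-weight centroid terms can serve as a rough cluster summary:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [  # tiny toy corpus; a real run would use the document collection
    "stock markets fell amid inflation fears",
    "central bank raises interest rates again",
    "team wins championship after overtime thriller",
    "star striker scores twice in the final match",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Crude cluster "interpretation": the top-weighted centroid terms.
terms = vec.get_feature_names_out()
for c, centroid in enumerate(km.cluster_centers_):
    top = centroid.argsort()[::-1][:5]
    print(f"cluster {c}: " + ", ".join(terms[i] for i in top))
```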
Abstract:
Around the world borders are militarized, states are stepping up repressive anti-immigrant controls, and native publics are turning immigrants into scapegoats for the spiraling crisis of global capitalism. The massive displacement and primitive accumulation unleashed by free trade agreements and neo-liberal policies, as well as state and “private” violence, have resulted in a virtually inexhaustible immigrant labor reserve for the global economy. State controls over immigration and immigrant labor serve several functions for the system: 1) state repression and criminalization of undocumented immigration make immigrants vulnerable and deportable and therefore subject to conditions of super-exploitation, super-control, and hyper-surveillance, making immigrant labor extremely profitable for the transnational corporate economy; 2) anti-immigrant repressive apparatuses are themselves ever more important sources of accumulation, ranging from private for-profit immigrant detention centers to the militarization of borders and the purchase by states of military hardware and systems of surveillance; 3) the anti-immigrant policies associated with repressive state apparatuses turn attention away from the crisis of global capitalism among more privileged sectors of the working class and convert immigrant workers into scapegoats for the crisis, thus deflecting attention from its root causes and undermining working-class unity. This article focuses on the structural and historical underpinnings of the phenomenon of immigrant labor in the new global capitalist system and on how the rise of a globally integrated production and financial system, a transnational capitalist class, and transnational state apparatuses have led to a reorganization of the world market in labor, including deeper reliance on a rapidly expanding reserve army of immigrant labor and a vicious new anti-immigrant politics. It looks at the United States as an illustration of the larger worldwide situation with regard to immigration and immigrant justice. Finally, it explores the rise of an immigrant justice movement around the world, observes the leading role that immigrant workers often play in workers' struggles, and argues that a mass immigrant rights movement is at the cutting edge of the struggle against transnational corporate exploitation. We call for replacing the whole concept of national citizenship with that of global citizenship as the only rallying cry that can assure justice and equality for all.
Abstract:
Virtual machines (VMs) are powerful platforms for building agile datacenters and emerging cloud systems. However, resource management for a VM-based system is still a challenging task. First, the complexity of application workloads, together with interference among competing workloads, makes it difficult to understand the VMs' resource demands for meeting their Quality of Service (QoS) targets. Second, dynamics in the applications and the system make it difficult to maintain a desired QoS target while the environment changes. Third, the transparency of virtualization presents a hurdle for guest-layer applications and the host-layer VM scheduler to cooperate in improving application QoS and system efficiency. This dissertation proposes to address these challenges through VM resource management based on fuzzy modeling and control theory. First, a fuzzy-logic-based nonlinear modeling approach is proposed to accurately capture a VM's complex demands for multiple types of resources, automatically and online, from the observed workload and resource usage. Second, to enable fast adaptation, the fuzzy modeling approach is integrated with a predictive controller to form a new Fuzzy Modeling Predictive Control (FMPC) approach, which can quickly track applications' QoS targets and optimize resource allocations under dynamic changes in the system. Finally, to address the limitations of black-box resource management, a cross-layer optimization approach is proposed to enable cooperation between a VM's host and guest layers and further improve application QoS and resource usage efficiency. The proposed approaches are prototyped on a Xen-based virtualized system and evaluated with representative benchmarks including TPC-H, RUBiS, and TerraFly. The results demonstrate that the fuzzy-modeling-based approach improves the accuracy of resource prediction by up to 31.4% compared to conventional regression approaches. The FMPC approach substantially outperforms a traditional linear-model-based predictive control approach in meeting application QoS targets for an oversubscribed system, and it manages dynamic VM resource allocations and migrations for over 100 concurrent VMs across multiple hosts with good efficiency. Finally, the cross-layer optimization approach further improves the performance of a virtualized application by up to 40% when resources are contended by dynamic workloads.
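To give a feel for the fuzzy-logic ingredient, the sketch below evaluates three triangular fuzzy rules that map a normalized QoS error to a CPU-share adjustment. This is a deliberately minimal, hypothetical illustration of fuzzy rule evaluation with weighted-average defuzzification; the dissertation's actual FMPC combines a learned fuzzy model with predictive control and is far richer than this.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_cpu_adjust(qos_error):
    """Map a normalized QoS error (-1..1, positive = missing the target)
    to a CPU-share adjustment via three fuzzy rules (hypothetical tuning)."""
    rules = [
        (tri(qos_error, -1.6, -1.0, 0.0), -0.10),  # over-provisioned -> shrink
        (tri(qos_error, -0.6,  0.0, 0.6),  0.00),  # on target -> hold
        (tri(qos_error,  0.0,  1.0, 1.6), +0.10),  # missing target -> grow
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

for e in (-1.0, -0.3, 0.0, 0.3, 1.0):
    print(f"QoS error {e:+.1f} -> CPU-share adjustment {fuzzy_cpu_adjust(e):+.3f}")
```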
Abstract:
A nuclear waste stream is the complete flow of waste material from its origin through the treatment facility to final disposal. The objective of this study was to design and develop a Geographic Information Systems (GIS) module, using the Google Maps Application Programming Interface (API), that identifies and displays various nuclear waste stream parameters for better visualization. A proper display of these parameters enables managers at Department of Energy waste sites to visualize information for proper planning of waste transport. The study also developed an algorithm using quadratic Bézier curves to make the map more understandable and usable. Microsoft Visual Studio 2012 and Microsoft SQL Server 2012 were used for the implementation. The study showed that the combination of several technologies can successfully provide dynamic mapping functionality. Future work should explore further Google Maps API functionality to enhance the visualization of nuclear waste streams.
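For reference, a quadratic Bézier curve through control points P0, P1, P2 is B(t) = (1-t)²P0 + 2(1-t)t·P1 + t²P2 for t in [0, 1]. The sketch below samples such a curve between two lat/lng endpoints; the bowed-midpoint control-point choice for separating overlapping routes is an assumption for illustration, not the study's algorithm.

```python
def quad_bezier(p0, p1, p2, n=20):
    """Sample n+1 points on the quadratic Bézier curve
    B(t) = (1-t)^2*P0 + 2(1-t)t*P1 + t^2*P2, t in [0, 1]."""
    pts = []
    for i in range(n + 1):
        t = i / n
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        pts.append((x, y))
    return pts

# Hypothetical example: bow a route between two sites by lifting the
# midpoint, so overlapping waste-stream arcs stay visually distinct.
origin, dest = (25.76, -80.19), (36.17, -115.14)   # illustrative lat/lng pairs
ctrl = ((origin[0] + dest[0]) / 2 + 2.0, (origin[1] + dest[1]) / 2)
for lat, lng in quad_bezier(origin, ctrl, dest, n=4):
    print(f"({lat:7.3f}, {lng:9.3f})")
```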
Abstract:
Natural disasters in Argentina and Chile played a significant role in the state-formation and nation-building process (1822-1939). This dissertation explores state and society responses to earthquakes by studying public and private relief efforts, reconstruction plans, crime and disorder, religious interpretations of catastrophes, national and transnational cultures of disaster, science and technology, and popular politics. Although Argentina and Chile share a political border and a geological boundary, the two countries provide contrasting examples of state formation. Most disaster relief and reconstruction efforts emanated from the centralized Chilean state in Santiago; in Argentina, provincial officials made the majority of decisions in a catastrophe's aftermath. Patriotic citizens raised money and collected clothing for survivors, which helped weave divergent regions together into a nation. The shared experience of earthquakes in all regions of Chile created a national disaster culture. Similarly, common disaster experiences, reciprocal relief efforts, and aid commissions linked Chileans with western Argentine societies and generated a transnational disaster culture. Political leaders viewed reconstruction as an opportunity to implement their visions for the nation on the urban landscape, but these rebuilding projects threatened existing social hierarchies and often failed to come to fruition. Rebuilding brought new technologies from Europe to the Southern Cone; new building materials and systems, however, had to be adapted to the South American economic and natural environment. In a catastrophe's aftermath, newspapers projected images of disorder and the authorities feared lawlessness and social unrest, yet judicial and criminal records show that crime often decreased after a disaster. Finally, nineteenth-century earthquakes heightened antagonism and conflict between the Catholic Church and the state: conservative clergy asserted that disasters were divine punishments for the state's anti-clerical measures and later railed against scientific explanations of earthquakes.