899 results for Integrated circuits Very large scale integration Design and construction.
Abstract:
As massive data sets become increasingly available, people face the problem of how to effectively process and understand them. Traditional sequential computing models are giving way to parallel and distributed computing models, such as MapReduce, due both to the large size of the data sets and to their high dimensionality. This dissertation, in the same direction as other research based on MapReduce, develops effective techniques and applications using MapReduce that can help people solve large-scale problems. Three different problems are tackled in the dissertation. The first deals with processing terabytes of raster data in a spatial data management system: aerial imagery files are broken into tiles to enable data-parallel computation. The second and third problems deal with dimension reduction techniques that can handle data sets of high dimensionality. Three variants of the nonnegative matrix factorization technique are scaled up in MapReduce, based on different matrix multiplication implementations, to factorize matrices with dimensions on the order of millions. Two algorithms, which compute the CANDECOMP/PARAFAC and Tucker tensor decompositions respectively, are parallelized in MapReduce by carefully partitioning the data and arranging the computation to maximize data locality and parallelism.
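The MapReduce implementations themselves are not reproduced in the abstract, but the core factorization being scaled up can be sketched on a single machine. Below is a minimal pure-Python version of the classic Lee-Seung multiplicative-update NMF, which is built entirely from the matrix products that the dissertation distributes; it is an illustration of the technique, not the dissertation's code, and the dissertation's three variants may use different update rules.

```python
import random

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=300, seed=0):
    """Multiplicative-update NMF: V (m x n) ~= W (m x k) @ H (k x n),
    with W, H kept elementwise nonnegative throughout."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    eps = 1e-10
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)   # H <- H * (W^T V) / (W^T W H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))   # W <- W * (V H^T) / (W H H^T)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(m)]
    return W, H

V = [[1.0, 0.0, 2.0], [0.0, 3.0, 1.0], [2.0, 1.0, 0.0]]
W, H = nmf(V, k=2)
R = matmul(W, H)
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(3) for j in range(3)) ** 0.5
```

Every step above reduces to the products W^T V, W^T W H, V H^T and W H H^T, which is why distributing matrix multiplication is the key to scaling NMF to matrices with millions of rows.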
Abstract:
In this paper, the author explores the barriers that students of English as a Second Language (ESL) face in becoming fully literate in English and fully integrated into American society. The barriers cited include inadequate training of reading specialists to work with ESL students, turf wars between reading specialists and ESL teachers, inadequate preparation of students for high school and higher education, and a lack of academic research on developing reading skills in ESL students. Strategies for overcoming these barriers include improving teacher training, understanding the student population and students' first language (L1), and promoting success in literacy.
Abstract:
Colombia's increasingly effective efforts to mitigate the power of the FARC and other illegitimately armed groups in the country can offer important lessons for the Peruvian government as it strives to prevent a resurgence of Sendero Luminoso and other illegal non-state actors. Both countries share particular challenges: deep economic and social (and, in the case of Peru, ethnic) divisions, the presence and/or effects of violent insurgencies, large-scale narcotics production and trafficking, and a history of weak state presence in large tracts of isolated and sparsely populated territory. Important differences exist, however, in the nature of the insurgencies in the two countries, in the government response to them, and in the character of government and society, all of which affect the applicability of Colombia's experience to Peru. The security threat to Panama from drug trafficking and Colombian insurgents, often a linked phenomenon, is in many ways different from the drug/insurgent factor in Colombia itself and in Peru, although there are similar variables. Unlike the Colombian and Peruvian cases, the security threat in Panama is not directed against the state; there are no domestic elements seeking to overthrow the government, as in the case of the FARC and Sendero Luminoso; security problems have not spilled over from rural to urban areas in Panama; and there is no ideological component driving the threat. Nor is drug cultivation a major factor in Panama as it is in Colombia and Peru. The key variable shared among all three cases is the threat of extra-state actors controlling remote rural areas or small towns where state presence is minimal. The central lesson learned from Colombia is the need to define and then address the key problem of a "sovereignty gap": the lack of legitimate state presence in many parts of the country.
Colombia's success in broadening the presence of the national government since 2002 is owed to many factors, including an effective national strategy, improvements in the armed forces and police, political will on the part of the government for a sustained effort, citizen buy-in to the national strategy (including the resolve of the elite to pay more in taxes to bring change about), and the adoption of a sequenced approach to consolidation and development in conflict-affected areas. Control of territory and effective state presence improved citizen security, strengthened confidence in democracy and the legitimate state, promoted economic development, and helped mitigate the effect of illegal drugs. Peru can benefit from the Colombian experience especially in terms of the importance of legitimate state authority, improved institutions, gaining the support of local citizens, and furthering development to wean communities away from drugs. State-coordinated "integration" efforts in Peru, as practiced in Colombia, have the potential for success if properly calibrated to Peruvian reality, coordinated within government, and provided with sufficient resources. Peru's traditionally weak political institutions and the lack of public confidence in the state in many areas of the country must be overcome if this effort is to succeed.
Abstract:
Bacteria are known to release a large variety of small molecules, known as autoinducers (AIs), which effect the initiation of quorum sensing (QS). Interrupting QS affects bacterial communication, growth, and virulence. Three novel classes of S-ribosylhomocysteine (SRH) analogues were developed as potential inhibitors of S-ribosylhomocysteinase (the LuxS enzyme) and AI-2 modulators of QS. The synthesis of 2-deoxy-2-bromo-SRH analogues was attempted by coupling the corresponding 2-bromo-2-deoxypentafuranosyl precursors with the homocysteinate anion; displacement of the bromide from C2, rather than the expected substitution of the mesylate from C5, was observed. The synthesis of 4-C-alkyl/aryl-S-ribosylhomocysteine analogues involved the following steps: (i) conversion of D-ribose to the ribitol-4-ulose; (ii) diastereoselective addition of various alkyl, aryl, or vinyl Grignard reagents to the 4-ketone intermediate; (iii) oxidation of the primary hydroxyl group at C1, followed by intramolecular ring closure to the corresponding 4-C-alkyl/aryl-substituted ribono-1,4-lactones; (iv) displacement of the activated 5-hydroxyl group with the protected homocysteinate. Treatment of the 4-C-alkyl/aryl-substituted SRH analogues with lithium triethylborohydride effected reduction of the ribonolactone to the ribose (hemiacetal), and subsequent global deprotection with trifluoroacetic acid provided the 4-C-alkyl/aryl-SRHs. The 4-[thia]-SRH analogues were prepared from the 1-deoxy-4-thioribose through coupling of the α-fluoro thioethers (thioribosyl fluorides) with the homocysteinate anion. The 4-[thia]-SRH analogues showed a concentration-dependent effect on las (50% inhibitory effect at 200 µg/mL). The most active was the 1-deoxy-4-[thia]-SRH analogue with the ring sulfur atom oxidized to the sulfoxide, which decreased las gene activity to approximately 35% without affecting the rhl gene. None of the tested compounds had an effect on the bioluminescence or total growth of V. harveyi, though slight inhibition of QS was observed.
Abstract:
Network simulation is an indispensable tool for studying Internet-scale networks, due to their heterogeneous structure, immense size, and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. In network simulation, we can distinguish between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require significant accuracy. The background traffic nevertheless has a significant impact on the foreground traffic, since it competes with the foreground traffic for network resources and can therefore drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic, in three aspects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing its two unrealistic assumptions; the improved model correctly reflects the network conditions in the reverse direction of the data traffic and reproduces the traffic burstiness observed in measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, built on analytical models of TCP congestion control behavior. It outperforms other existing traffic models in that it correctly captures overall TCP behavior while achieving a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. However detailed or scalable such models are, they mainly focus on generating traffic on a single link, which cannot easily be extended to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for evaluation in network studies.
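The rate-based analytical style of TCP modeling that the RTCP model belongs to can be illustrated with the classic Mathis et al. steady-state formula, which estimates TCP throughput from segment size, round-trip time, and loss rate without simulating individual packets. This is a generic textbook model, not the dissertation's RTCP model; the numbers below are illustrative.

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Steady-state TCP throughput estimate (Mathis et al.):
    BW ~= (MSS / RTT) * sqrt(3/2) / sqrt(p), in bytes per second."""
    return (mss_bytes / rtt_s) * math.sqrt(1.5 / loss_rate)

# 1460-byte segments, 100 ms round-trip time, 1% packet loss
bw = mathis_throughput(1460, 0.100, 0.01)   # ~1.8e5 bytes/s
```

Because such formulas evaluate in constant time per flow, rate-based models can be orders of magnitude faster than packet-oriented simulation, which is exactly the trade-off the abstract describes.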
Abstract:
Long-span bridges are flexible and therefore sensitive to wind-induced effects. One way to improve the stability of long-span bridges against flutter is to use cross-sections with twin side-by-side decks; however, this can amplify responses due to vortex-induced oscillations. Wind tunnel testing is a well-established practice for evaluating the stability of bridges against wind loads. In order to study the response of the prototype in the laboratory, dynamic similarity requirements should be satisfied. One parameter that is normally violated in wind tunnel testing is the Reynolds number. In this dissertation, the effects of Reynolds number on the aerodynamics of a double-deck bridge were evaluated by measuring fluctuating forces on a motionless sectional model of a bridge at different wind speeds representing different Reynolds regimes. The efficacy of vortex mitigation devices was also evaluated across Reynolds number regimes. Another parameter frequently ignored in wind tunnel studies is the correct simulation of turbulence characteristics. Because of the difficulty of simulating flow with a large turbulence length scale on a sectional model, wind tunnel tests are often performed in smooth flow as a conservative approach. The validity of the simplifying assumptions in the calculation of buffeting loads, as the direct impact of turbulence, needs to be verified for twin-deck bridges. The effects of turbulence characteristics were investigated by testing sectional models of a twin-deck bridge under two different turbulent flow conditions. Not only do the flow properties play an important role in the aerodynamic response of the bridge; the geometry of the cross-section is also expected to have significant effects. In this dissertation, the effects of deck details, such as the width of the gap between the twin decks and the traffic barriers, on the aerodynamic characteristics of a twin-deck bridge were investigated, particularly on the vortex shedding forces, with the aim of clarifying how these shape details can alter the wind-induced responses. Finally, a summary of the issues involved in designing a dynamic test rig for high Reynolds number tests is given, using the studied cross-section as an example.
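The Reynolds-number mismatch discussed above follows directly from the standard definition Re = U L / ν. A quick sketch shows why a sectional model cannot match full-scale Re at comparable wind speeds; the deck widths and speed below are illustrative assumptions, not values from the dissertation.

```python
def reynolds(wind_speed, length, nu=1.5e-5):
    """Re = U * L / nu, with nu the kinematic viscosity of air (~1.5e-5 m^2/s)
    at room temperature, U in m/s, and L the characteristic length in m."""
    return wind_speed * length / nu

re_model = reynolds(wind_speed=20.0, length=0.5)   # hypothetical sectional-model deck width
re_full = reynolds(wind_speed=20.0, length=30.0)   # hypothetical full-scale deck width
scale_gap = re_full / re_model                     # Re scales linearly with length
```

Because Re scales linearly with the characteristic length, a geometric scale factor of 60 leaves the model two orders of magnitude short of full-scale Re at the same wind speed, which is why Reynolds-sensitivity studies like this one are needed.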
Abstract:
Construction projects are complex endeavors that require the involvement of different professional disciplines in order to meet project objectives that are often conflicting. The level of complexity and the multi-objective nature of construction projects lend themselves to collaborative design and construction approaches such as integrated project delivery (IPD), in which the relevant disciplines work together during project conception, design, and construction. Traditionally, the main objectives of construction projects have been to build in the least amount of time at the lowest possible cost, so the inherent and well-established relationship between cost and time has been the focus of many studies. The importance of effectively modeling relationships among multiple objectives in building construction has been emphasized in a wide range of research. In general, the trade-off relationship between time and cost is well understood, and there is ample research on the subject. However, despite the rise of sustainable building design, the relationships between time and environmental impact, and between cost and environmental impact, have not been fully investigated. The objectives of this research were to analyze and identify the relationships of time, cost, and environmental impact, in terms of CO2 emissions, at different levels of a building (material, component, and building) during the pre-use phase, including manufacturing and construction, and the relationship between life cycle cost and life cycle CO2 emissions during the usage phase. Additionally, this research aimed to develop a robust simulation-based multi-objective decision-support tool, called SimulEICon, which takes construction data uncertainty into account and can incorporate life cycle assessment information into the decision-making process. The findings of this research support the trade-off relationship between time and cost at different building levels. Moreover, the relationship between time and CO2 emissions showed trade-off behavior at the pre-use phase, while the relationship between cost and CO2 emissions was, interestingly, proportional at the pre-use phase; the same pattern persisted from construction through the usage phase. Understanding the relationships between these objectives is key to successfully planning and designing environmentally sustainable construction projects.
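SimulEICon itself is not described in implementation detail here, but the core of any simulation-based multi-objective decision-support tool is a nondominated (Pareto) filter over candidate plans. The sketch below applies such a filter to hypothetical (time, cost, CO2) triples, all minimized; the plan values are invented for illustration and do not come from the research.

```python
def dominates(a, b):
    """Plan a dominates plan b if a is no worse in every objective
    and strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Keep only plans that no other plan dominates."""
    return [p for p in plans if not any(dominates(q, p) for q in plans if q is not p)]

# hypothetical alternatives: (duration in days, cost in $k, tonnes of CO2)
plans = [(120, 900, 310), (150, 780, 290), (130, 920, 315), (200, 700, 260)]
front = pareto_front(plans)   # (130, 920, 315) is dominated by (120, 900, 310)
```

Presenting only the Pareto front is what lets a decision maker see the time/cost/CO2 trade-offs directly, rather than wading through dominated alternatives.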
Abstract:
My thesis examines fine-scale habitat use and movement patterns of age-1 Greenland cod (Gadus macrocephalus ogac) tracked using acoustic telemetry. Recent advances in tracking technologies such as GPS and acoustic telemetry have led to increasingly large and detailed datasets that present new opportunities for researchers to address fine-scale ecological questions regarding animal movement and spatial distribution. There is a growing demand for home range models that will not only work with massive quantities of autocorrelated data, but can also exploit the added detail inherent in these high-resolution datasets. Most published home range studies use radio-telemetry or satellite data from terrestrial mammals or avian species, and most studies that evaluate the relative performance of home range models use simulated data. In Chapter 2, I used actual field-collected data from age-1 Greenland cod tracked with acoustic telemetry to evaluate the accuracy and precision of six home range models: minimum convex polygons, kernel densities with plug-in bandwidth selection and the reference bandwidth, adaptive local convex hulls, Brownian bridges, and dynamic Brownian bridges. I then applied the most appropriate model to two years (2010-2012) of tracking data collected from 82 Greenland cod tagged in Newman Sound, Newfoundland, Canada, to determine diel and seasonal differences in habitat use and movement patterns (Chapter 3). Little is known of juvenile cod ecology, so resolving these relationships will provide valuable insight into activity patterns, habitat use, and predator-prey dynamics, while filling a knowledge gap regarding the use of space by age-1 Greenland cod in a coastal nursery habitat. In doing so, my thesis demonstrates an appropriate technique for modelling the spatial use of fish from acoustic telemetry data that can be applied to high-resolution, high-frequency tracking datasets collected from mobile organisms in any environment.
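Of the six home range models compared in Chapter 2, the minimum convex polygon is the simplest to sketch: the home range estimate is the area of the convex hull of the relocation fixes. Below is a generic illustration assuming planar (x, y) coordinates and made-up fixes; it is not the thesis code, which works with real telemetry positions.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    """Shoelace area of the 100% minimum convex polygon."""
    h = convex_hull(points)
    return abs(sum(h[i][0] * h[(i + 1) % len(h)][1] - h[(i + 1) % len(h)][0] * h[i][1]
                   for i in range(len(h)))) / 2.0

fixes = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]   # hypothetical relocations
area = mcp_area(fixes)
```

The MCP's weakness, and the motivation for the kernel and Brownian-bridge models the thesis evaluates, is that it ignores both the density of fixes and the temporal autocorrelation between them.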
Abstract:
Social media classification problems have drawn increasing attention in the past few years. With the rapid development of the Internet and the popularity of computers, there is an astronomical amount of information on social media platforms. The datasets are generally large-scale and are often corrupted by noise, and the presence of noise in the training set has a strong impact on the performance of supervised learning (classification) techniques. A budget-driven One-class SVM approach suitable for large-scale social media data classification is presented in this thesis. Our approach is based on an existing online One-class SVM learning algorithm, referred to as the STOCS (Self-Tuning One-Class SVM) algorithm. To justify our choice, we first analyze the noise resilience of STOCS using synthetic data; the experiments suggest that STOCS is more robust against label noise than several other existing approaches. Next, to handle the big-data classification problem for social media, we introduce several budget-driven features, which allow the algorithm to be trained within limited time and under a limited memory budget. Moreover, the resulting algorithm can be easily adapted to changes in dynamic data with minimal computational cost. Compared with two state-of-the-art approaches, Lib-Linear and kNN, our approach is shown to be competitive, with lower memory and time requirements.
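STOCS itself is defined in the cited prior work, so it is not reproduced here; the budget idea, however, can be illustrated with a much simpler online novelty detector that caps its stored support set at a fixed size. This is a toy sketch of the fixed-memory constraint only, not the One-class SVM mathematics or the thesis algorithm.

```python
class BudgetedNoveltyDetector:
    """Toy online one-class learner: keep at most `budget` stored points;
    a new point is an inlier if it lies within `radius` of any stored point.
    Illustrates training under a fixed memory budget, nothing more."""

    def __init__(self, budget, radius):
        self.budget, self.radius, self.support = budget, radius, []

    def _dist2(self, a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def score(self, x):
        """True if x is an inlier under the current model."""
        if not self.support:
            return False
        return min(self._dist2(x, s) for s in self.support) <= self.radius ** 2

    def partial_fit(self, x):
        inlier = self.score(x)
        if not inlier:
            self.support.append(x)
            if len(self.support) > self.budget:   # evict oldest to respect the budget
                self.support.pop(0)
        return inlier

det = BudgetedNoveltyDetector(budget=3, radius=1.0)
for p in [(0, 0), (0.5, 0.2), (5, 5), (5.3, 4.9), (10, 10)]:
    det.partial_fit(p)
```

A real budgeted kernel method replaces the naive oldest-first eviction with a principled rule (e.g., removing the support vector whose loss is smallest), but the memory and per-update time bounds work the same way.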
Abstract:
Topographic variation, the spatial variation in elevation and terrain features, underpins myriad patterns and processes in geography and ecology and is key to understanding the variation of life on the planet. The characterization of this variation is scale-dependent, i.e., it varies with the distance over which features are assessed and with the spatial grain (grid cell resolution) of the analysis. A fully standardized, global, multivariate product of different terrain features has the potential to support many large-scale basic research and analytical applications; to date, however, no such product has been available. Here we used the digital elevation model products of the global 250 m GMTED and the near-global 90 m SRTM to derive a suite of topographic variables: elevation, slope, aspect, eastness, northness, roughness, terrain ruggedness index, topographic position index, vector ruggedness measure, profile and tangential curvature, and 10 geomorphological landform classes. We aggregated each variable to 1, 5, 10, 50 and 100 km spatial grains using several aggregation approaches (median, average, minimum, maximum, standard deviation, percent cover, count, majority, Shannon Index, entropy, uniformity). While a global cross-correlation underlines the high similarity of many variables, a more detailed view of four mountain regions reveals local differences, as well as scale variations in the aggregated variables at different spatial grains. All newly developed variables are available for download at http://www.earthenv.org and can serve as a basis for standardized hydrological, environmental and biodiversity modeling at a global extent.
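The derive-then-aggregate pipeline can be sketched for one of the simpler variables, roughness (the elevation range in the 3 x 3 neighborhood of each cell), followed by mean aggregation to a coarser spatial grain. The toy grid below is an illustration of the two-step structure only, not the GMTED/SRTM processing code.

```python
def roughness(dem):
    """Roughness: max - min elevation in the 3x3 window around each interior cell;
    border cells are left at 0 for simplicity."""
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = [dem[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = max(window) - min(window)
    return out

def aggregate_mean(grid, factor):
    """Coarsen the grain by averaging non-overlapping factor x factor blocks."""
    return [[sum(grid[i + di][j + dj] for di in range(factor) for dj in range(factor))
             / factor ** 2
             for j in range(0, len(grid[0]) - factor + 1, factor)]
            for i in range(0, len(grid) - factor + 1, factor)]

dem = [[0, 0, 0, 0],
       [0, 10, 10, 0],
       [0, 10, 10, 0],
       [0, 0, 0, 0]]
rough = roughness(dem)            # every interior cell sees a 0-to-10 range
coarse = aggregate_mean(rough, 2) # same variable at a 2x coarser grain
```

Swapping `sum(...) / factor**2` for `max`, `min`, or a standard deviation gives the other aggregation approaches listed in the abstract, which is why each variable can be published at several grains with several statistics.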
Abstract:
The development of cost-effective and reliable methods for the synthesis and separation of asymmetric compounds is paramount in helping to meet society's ever-growing demand for chiral small molecules. Among these methods, chiral heterogeneous supports are particularly appealing because they allow the chiral source to be reused. One such support, based on the synergy between chiral organic units and structurally stable inorganic silicon scaffolds, is the class of periodic mesoporous organosilicas (PMOs). In the work described herein, I examine some of the factors governing the transmission of chirality between chiral dopants and prochiral bulk phases in chiral PMO materials; in particular, I explore 1,1'-binaphthalene-bridged chiral dopants with a focus on their point of attachment into the materials. Moreover, the effects of ordering in the materials are examined and reveal that chirality transfer is more facile in materials with molecular-scale order than in those containing amorphous walls. Secondly, the issues surrounding the synthesis and purification of aryl-triethoxysilanes as siloxane precursors are addressed. Both the introduction of a two-carbon linker and the direct attachment of allyl and mixed allyldiethoxysilane species are explored. This work demonstrates that allyldiethoxysilanes are ideal in that they are stable enough to permit facile synthesis while still hydrolyzing completely to produce well-ordered materials. Lastly, the production of new bulk phases for chiral PMO materials is examined by introducing new prochiral nitrogen-containing siloxane precursors. Biphenyldiamine- and bipyridine-bridged siloxane precursors are readily synthesized on reasonable scales. Their use as the bulk siloxane precursor in the production of PMO materials, however, is precluded by insufficient gelation, and additional siloxane precursors are necessary to produce ordered materials. In addition to the research detailed above, which forms the body of this thesis, two short works are appended. The first details the production of polythiophene assemblies mediated through coordination nanospaces; the second explores the production of N-heterocyclic carbene functionalized gold nanoparticles through ligand exchange.
Abstract:
This thesis presents details of the design and development of novel tools and instruments for scanning tunneling microscopy (STM), and may be considered a repository for several years' worth of development work. The author presents design goals and implementations for two microscopes. First, a novel Pan-type STM was built that can be operated in an ambient environment as a liquid-phase STM. Unique features of this microscope include a unibody frame for increased rigidity, a novel slider component with a large Z-range, a unique wiring scheme and damping mechanism, and a removable liquid cell. The microscope exhibits a high level of mechanical isolation at the tunnel junction and performs excellently as an ambient tool; experiments in liquid are ongoing. Simultaneously, the author worked on designs for a novel low-temperature, ultra-high-vacuum (LT-UHV) instrument, and these are presented as well. A novel stick-slip vertical coarse-approach motor was designed and built. To gauge the performance of the motor, an in situ motion-sensing apparatus was implemented that could measure the step size of the motor to high precision. A new driving circuit for stick-slip inertial motors is also presented that offers improved performance over our previous driving circuit at a fraction of the cost; the circuit was shown to increase step-size performance by 25%. Finally, a horizontal sample stage was implemented in this microscope. The build of this UHV instrument is currently being finalized. In conjunction with the above design projects, the author was involved in a collaborative project characterizing N-heterocyclic carbene (NHC) self-assembled monolayers (SAMs) on Au(111) films. STM was used to characterize Au substrate quality, both for commercial substrates and for those manufactured via a unique atomic layer deposition (ALD) process by collaborators. Ambient and UHV STM were then also used to characterize the NHC/Au(111) films themselves, and several key properties of these films are discussed. During this study, the author discovered an unexpected surface contaminant, details of which are also presented. Finally, two models for the nature of the NHC-Au(111) surface interaction are presented based on the observed film properties, along with some preliminary theoretical work by collaborators.
Abstract:
The focus of this thesis is to explore and quantify the response of satellite-based gravity observations to large-scale solid mass transfer events. The gravity signature of large-scale solid mass transfers has not yet been explored in depth, mainly due to the lack of significant events during the lifespans of dedicated satellite gravity missions. In light of the next generation of gravity missions, the feasibility of employing satellite gravity observations to detect submarine and surface mass transfers is of importance for geoscience (it improves the understanding of geodynamic processes) and for geodesy (it improves the understanding of the dynamic gravity field). The aim of this thesis is twofold: to assess the feasibility of using satellite gravity observations for detecting large-scale solid mass transfers, and to model the impact of these events on the gravity field. A methodology that employs 3D forward modeling simulations and 2D wavelet multiresolution analysis is suggested to estimate the impact of solid mass transfers on satellite gravity observations. The gravity signatures of various past submarine and subaerial events were estimated, and case studies were conducted to assess the sensitivity and resolvability required to observe gravity differences caused by solid mass transfers. Simulation studies were also employed to assess the expected contribution of the Next Generation of Gravity Missions for this application.
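The 3D forward modeling used in such studies ultimately reduces to summing the Newtonian attraction of discretized mass elements. The vertical gravity effect of a single point mass is the minimal building block; the mass and geometry below are illustrative assumptions, not an event analyzed in the thesis.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def delta_gz(mass_kg, dx, dy, depth):
    """Vertical gravity effect (m/s^2) of a point mass at horizontal offset
    (dx, dy) and vertical distance `depth` below the observation point:
    g_z = G * m * z / (x^2 + y^2 + z^2)^(3/2)."""
    r2 = dx * dx + dy * dy + depth * depth
    return G * mass_kg * depth / r2 ** 1.5

# ~1 km^3 of rock at 2700 kg/m^3, observed from 10 km directly above
dg = delta_gz(2.7e12, 0.0, 0.0, 10_000.0)
```

A forward model of a real mass transfer sums this kernel over the source volume before the event and after it; the difference of the two summed fields, low-pass filtered to satellite resolution (the role of the wavelet multiresolution analysis), is the signal a gravity mission would need to resolve.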
Abstract:
Since the 1950s the global consumption of natural resources has skyrocketed, both in magnitude and in the range of resources used. Closely coupled with emissions of greenhouse gases, land consumption, pollution of environmental media, and degradation of ecosystems, as well as with economic development, increasing resource use is a key issue to be addressed in order to keep planet Earth in a safe and just operating space. This requires thinking about absolute reductions in resource use and associated environmental impacts and, when put in the context of the current re-focusing on economic growth at the European level, absolute decoupling, i.e., maintaining economic development while absolutely reducing resource use and associated environmental impacts. Changing the behavioural, institutional and organisational structures that lock in unsustainable resource use is thus a formidable challenge, as existing world views, social practices, infrastructures and power structures make initiating change difficult. Hence, policy mixes are needed that target different drivers in a systematic way. When designing policy mixes for decoupling, the effect of individual instruments on other drivers and on other instruments in the mix should be considered, and potential negative effects mitigated. This requires smart and time-dynamic policy packaging. This Special Issue investigates the following research questions: What is decoupling, and how does it relate to resource efficiency and environmental policy? How can we develop and realize policy mixes for decoupling economic development from resource use and associated environmental impacts? And how can we do this in a systemic way, so that all relevant dimensions and linkages—including across economic and social issues, such as production, consumption, transport, growth and wellbeing—are taken into account? In addressing these questions, the overarching goals of this Special Issue are to: address the challenges related to more sustainable resource use; contribute to the development of successful policy tools and practices for sustainable development and resource efficiency (particularly through the exploration of socio-economic, scientific, and integrated aspects of sustainable development); and inform policy debates and policy-making. The Special Issue draws on findings from the EU and other countries to offer lessons of international relevance for policy mixes for more sustainable resource use, with findings of interest to policy makers in central and local government and NGOs, decision makers in business, academics, researchers, and scientists.