863 results for critical path methods
Abstract:
The tragic events of September 11th ushered in a new era of unprecedented challenges: the nation must be protected from the alarming threats of adversaries who exploit its critical infrastructures, affecting every sector of the economy. There is a need for pervasive monitoring and decentralized control of the nation's critical infrastructures. The communication needs of infrastructure monitoring and control were traditionally met by wired communication systems, which ensure high reliability and bandwidth but are very expensive and inflexible and do not support mobility or pervasive monitoring. Their Ethernet-based protocols use contention access, which results in high rates of failed transmissions and delay. An emerging class of wireless networks, embedded wireless sensor and actuator networks, offers potential benefits for real-time monitoring and control of critical infrastructures. Using embedded wireless networks for monitoring and control of critical infrastructures requires secure, reliable, and timely exchange of information among controllers, distributed sensors, and actuators over shared wireless media. However, wireless media are highly unpredictable due to path loss, shadow fading, and ambient noise, while monitoring and control applications have stringent requirements on reliability, delay, and security. The primary issue addressed in this dissertation is the impact of wireless media in harsh industrial environments on the reliable and timely delivery of critical data. In the first part of the dissertation, a combined networking and information-theoretic approach is adopted to determine the transmit power required to maintain a minimum wireless channel capacity for reliable data transmission. The second part describes a channel-aware scheduling scheme that ensures efficient utilization of the wireless link and guarantees bounded delay.
Analytical evaluations and simulations validate the feasibility of these methodologies and demonstrate that the protocols achieve reliable, real-time data delivery in wireless industrial networks.
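The information-theoretic step described above, finding the transmit power that sustains a minimum channel capacity, can be illustrated with the Shannon capacity formula. This is a generic sketch, not the dissertation's actual channel model; the bandwidth, noise floor, and path-loss gain below are assumed values.

```python
import math

def min_transmit_power(target_bps, bandwidth_hz, noise_w, gain):
    """Minimum transmit power (W) so that the Shannon capacity
    C = B * log2(1 + P*g/N) of an AWGN link meets target_bps."""
    required_snr = 2 ** (target_bps / bandwidth_hz) - 1
    return required_snr * noise_w / gain

def capacity(power_w, bandwidth_hz, noise_w, gain):
    """Shannon capacity (bit/s) of an AWGN link at the given power."""
    return bandwidth_hz * math.log2(1 + power_w * gain / noise_w)

# Hypothetical link budget: 1 MHz band, 1e-13 W noise floor, 80 dB path loss.
B, N, g = 1e6, 1e-13, 1e-8
p = min_transmit_power(2e6, B, N, g)  # power needed to sustain 2 Mbit/s
```

The controller would re-evaluate `p` as the estimated gain `g` changes with fading, which is the coupling between the networking and information-theoretic views.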
Abstract:
In recent years, a surprising new phenomenon has emerged in which globally-distributed online communities collaborate to create useful and sophisticated computer software. These open source software groups are composed of generally unaffiliated individuals and organizations who work in a seemingly chaotic fashion and who participate on a voluntary basis without direct financial incentive. The purpose of this research is to investigate the relationship between the social network structure of these intriguing groups and their level of output and activity, where social network structure is defined as (1) closure or connectedness within the group, (2) bridging ties which extend outside of the group, and (3) leader centrality within the group. Based on well-tested theories of social capital and centrality in teams, propositions were formulated which suggest that social network structures associated with successful open source software project communities will exhibit high levels of bridging and moderate levels of closure and leader centrality. The research setting was the SourceForge hosting organization, and a study population of 143 project communities was identified. Independent variables included measures of closure and leader centrality defined over conversational ties, along with measures of bridging defined over membership ties. Dependent variables included source code commits and software releases for community output, and software downloads and project site page views for community activity. A cross-sectional study design was used, and archival data were extracted and aggregated for the two-year period following the first release of project software. The resulting compiled variables were analyzed using multiple linear and quadratic regressions, controlling for group size and conversational volume.
Contrary to theory-based expectations, the surprising results showed that successful project groups exhibited low levels of closure and that the levels of bridging and leader centrality were not important factors of success. These findings suggest that the creation and use of open source software may represent a fundamentally new socio-technical development process which disrupts the team paradigm and which triggers the need for building new theories of collaborative development. These new theories could point towards the broader application of open source methods for the creation of knowledge-based products other than software.
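The closure measure and the quadratic regressions described above can be sketched in a few lines. The toy community and sample points below are fabricated for illustration; they are not the study's variables or data.

```python
import numpy as np

def closure(n_members, ties):
    """Group closure as undirected network density:
    realized ties / possible ties, in [0, 1]."""
    possible = n_members * (n_members - 1) / 2
    return len(ties) / possible

# Toy community: 4 members with 4 undirected conversational ties.
ties = [(0, 1), (0, 2), (1, 2), (2, 3)]
density = closure(4, ties)  # 4 realized of 6 possible ties

# Quadratic regression of output on closure (fabricated points),
# the functional form used to test for an intermediate optimum.
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
y = np.array([2.0, 3.5, 4.0, 3.4, 2.1])
quad_coeffs = np.polyfit(x, y, 2)  # [a, b, c] for a*x^2 + b*x + c
```

A negative leading coefficient `a` would indicate the inverted-U shape that the "moderate closure" proposition predicts.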
Abstract:
Nanocrystalline and bulk samples of “Fe”-doped CuO were prepared by coprecipitation and ceramic methods. Structural and compositional analyses were performed using X-ray diffraction, SEM, and EDAX. Traces of secondary phases such as CuFe2O4, Fe3O4, and α-Fe2O3, having peaks very close to those of the host CuO, were identified from the Rietveld profile analysis and the SAED pattern of bulk and nanocrystalline Cu0.98Fe0.02O samples. Vibrating sample magnetometer (VSM) measurements show hysteresis at 300 K for all the samples. The ferrimagnetic Néel transition temperature (T_N) was found to be around 465°C irrespective of the “Fe” content, which is close to the value for cubic CuFe2O4. High-pressure X-ray diffraction studies were performed on 2% “Fe”-doped bulk CuO using synchrotron radiation. From the absence of any strong new peaks at high pressure, it is evident that the secondary phases, if present, are below the level of detection. Cu2O, which is diamagnetic by nature, was also doped with 1% “Fe” and was found to show paramagnetic behavior, in contrast to the “Fe”-doped CuO. Hence the possibility of intrinsic magnetization of “Fe”-doped CuO, apart from the secondary phases, is discussed based on the magnetization and charge state of “Fe” and the host into which it is substituted.
Abstract:
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after properly taming the large scaling corrections of the model by applying a combined approach of various techniques drawn from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size limit extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
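For context, the quotients (phenomenological renormalization) method mentioned above compares an observable at sizes L and 2L. In its standard generic form (the abstract's extension to the specific heat is not reproduced here):

```latex
% For an observable O scaling as O \sim L^{x_O/\nu} at criticality,
% compare sizes L and 2L at the crossing where \xi_{2L}/2L = \xi_L/L:
\frac{O_{2L}}{O_L}\bigg|_{\text{crossing}}
  = 2^{x_O/\nu} + O\!\left(L^{-\omega}\right)
\quad\Longrightarrow\quad
\frac{x_O}{\nu} = \frac{1}{\ln 2}\,
  \ln\frac{O_{2L}}{O_L}\bigg|_{\text{crossing}},
```

where ω is the leading corrections-to-scaling exponent whose "taming" the abstract refers to.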
Abstract:
This working paper is based on the autobiographical reflection of two female researchers who conducted qualitative and ethnographic research, from 2008 to 2014, in Central and Southeast Asia. Their field experiences serve as the basis for comparison in this paper. Focusing on positioning in the field, the study shows that, first, holding an intermediary position and speaking a local language are essential to securing access and carrying out research activities in the field. Second, different regions predetermine specific cultural and political contexts which, in turn, shape social science research. Third, being a woman presents both advantages and disadvantages. Finally, in terms of methodology, internships and interviews proved to be reliable methods for collecting empirical data on the above-mentioned regions, without, however, making it possible to build trust.
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) a reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
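As a loose illustration of the idea that die positions on the interposer inform the order of dies in a test path, here is a greedy nearest-neighbor ordering sketch. The coordinates and the greedy rule are hypothetical; the dissertation's scheduling minimizes a composite cost function, not this heuristic.

```python
def greedy_test_path(positions, start=0):
    """Order dies by repeatedly visiting the nearest unvisited die
    (Manhattan distance), starting from die `start`. Illustrative only."""
    unvisited = set(range(len(positions))) - {start}
    order = [start]
    while unvisited:
        cur = positions[order[-1]]
        nxt = min(unvisited,
                  key=lambda i: abs(positions[i][0] - cur[0])
                              + abs(positions[i][1] - cur[1]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Hypothetical die placements (x, y) on the interposer.
dies = [(0, 0), (5, 0), (1, 1), (7, 1)]
path = greedy_test_path(dies)  # [0, 2, 1, 3]: nearby dies chained first
```

Chaining nearby dies keeps test-access TSVs and micro-bump wiring short, which is the intuition behind position-aware path ordering.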
To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback-shift-register (LFSR), a multiple-input-signature-register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced. The first solution minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not toggle at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experiment results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.
Abstract:
Coronary heart disease is the major cause of morbidity and mortality throughout the world, and is responsible for approximately one of every six deaths in the US. Angina pectoris is a clinical syndrome characterized by discomfort, typically in the chest, neck, chin, or left arm, induced by physical exertion, emotional stress, or cold, and relieved by rest or nitroglycerin. The main goals of treatment of stable angina pectoris are to improve quality of life by reducing the severity and/or frequency of symptoms, to increase functional capacity, and to improve prognosis. Ranolazine is a recently developed antianginal agent with a unique mechanism of action. In this paper, we review the pharmacology of ranolazine, the clinical trials supporting its approval for clinical use, and studies of its quality of life benefits. We conclude that ranolazine has been shown to be a reasonable and safe option for patients who have refractory ischemic symptoms despite the use of standard medications (for example, nitrates, beta-adrenergic receptor antagonists, and calcium channel antagonists) for treatment of anginal symptoms, and that it also provides a modestly improved quality of life.
Abstract:
BACKGROUND: Fluid resuscitation is a cornerstone of intensive care treatment, yet there is a lack of agreement on how various types of fluids should be used in critically ill patients with different disease states. Therefore, our goal was to investigate the practice patterns of fluid utilization for resuscitation of adult patients in intensive care units (ICUs) within the USA. METHODS: We conducted a cross-sectional online survey of 502 physicians practicing in medical and surgical ICUs. Survey questions were designed to assess clinical decision-making processes for 3 types of patients who need volume expansion: (1) not bleeding and not septic, (2) bleeding but not septic, (3) requiring resuscitation for sepsis. First-choice fluid used in fluid boluses for these 3 patient types was requested from the respondents. Descriptive statistics were computed, and a Kruskal-Wallis test was used to evaluate differences among the physician groups. Follow-up tests, including t tests, were conducted to evaluate differences between ICU types, hospital settings, and bolus volume. RESULTS: Fluid resuscitation varied with respect to preferences for the factors to determine volume status and preferences for fluid types. The 3 most frequently preferred volume indicators were blood pressure, urine output, and central venous pressure. Regardless of the patient type, the most preferred fluid type was crystalloid, followed by 5 % albumin and then 6 % hydroxyethyl starches (HES) 450/0.70 and 6 % HES 600/0.75. Surprisingly, up to 10 % of physicians still chose HES as the first choice of fluid for resuscitation in sepsis. The clinical specialty and the practice setting of the treating physicians also influenced fluid choices. CONCLUSIONS: Practice patterns of fluid resuscitation varied in the USA, depending on patient characteristics, clinical specialties, and practice settings of the treating physicians.
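The Kruskal-Wallis comparison used in the survey analysis takes only a few lines to reproduce. The scores below are fabricated preference ratings for three hypothetical physician groups, not the survey's data.

```python
from scipy.stats import kruskal

# Fabricated fluid-preference scores for three physician groups
# (e.g., medical ICU, surgical ICU, mixed ICU).
group_a = [3, 4, 2, 5, 4, 3]
group_b = [5, 6, 5, 7, 6, 5]
group_c = [2, 3, 2, 1, 3, 2]

# H statistic and p-value for the null hypothesis that all three
# groups are drawn from the same distribution (rank-based, no
# normality assumption, which suits ordinal survey responses).
stat, pvalue = kruskal(group_a, group_b, group_c)
```

The rank-based test is a natural fit here because survey preference scores are ordinal rather than normally distributed.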
Abstract:
First-order transitions of systems in which both lattice site occupancy and lattice spacing fluctuate, such as cluster crystals, cannot be efficiently studied by traditional simulation methods, which necessarily fix one of these two degrees of freedom. The difficulty, however, can be surmounted by the generalized [N]pT ensemble [J. Chem. Phys. 136, 214106 (2012)]. Here we show that histogram reweighting and the [N]pT ensemble can be used to study an isostructural transition between cluster crystals of different occupancy in the generalized exponential model of index 4 (GEM-4). Extending this scheme to finite-size scaling studies also allows us to accurately determine the critical point parameters and to verify that the transition belongs to the Ising universality class.
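Histogram reweighting, used above to locate the transition, re-uses samples drawn at one state point to estimate averages at a nearby one. A minimal single-histogram sketch in reduced (kT = 1 at beta = 1) units, with toy data rather than GEM-4 samples:

```python
import numpy as np

def reweight(energies, observable, beta0, beta1):
    """Estimate <O> at inverse temperature beta1 from samples at beta0:
    <O>_b1 = sum_i O_i exp(-(b1-b0) E_i) / sum_i exp(-(b1-b0) E_i).
    Energies are shifted by their minimum for numerical stability."""
    w = np.exp(-(beta1 - beta0) * (energies - energies.min()))
    return np.sum(observable * w) / np.sum(w)

# Toy sample: energies and an observable recorded at beta0 = 1.0.
E = np.array([1.0, 2.0, 3.0, 4.0])
O = np.array([0.1, 0.2, 0.3, 0.4])
same = reweight(E, O, 1.0, 1.0)    # reduces to the plain mean, 0.25
cooler = reweight(E, O, 1.0, 1.2)  # low-energy samples weighted up
```

Scanning `beta1` (or pressure, in the [N]pT case) lets one trace observables continuously through the transition region from a single simulation.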
Abstract:
OBJECTIVE: To demonstrate the application of causal inference methods to observational data in the obstetrics and gynecology field, particularly causal modeling and semi-parametric estimation. BACKGROUND: Human immunodeficiency virus (HIV)-positive women are at increased risk for cervical cancer and its treatable precursors. Determining whether potential risk factors such as hormonal contraception are true causes is critical for informing public health strategies as longevity increases among HIV-positive women in developing countries. METHODS: We developed a causal model of the factors related to combined oral contraceptive (COC) use and cervical intraepithelial neoplasia 2 or greater (CIN2+) and modified the model to fit the observed data, drawn from women in a cervical cancer screening program at HIV clinics in Kenya. Assumptions required for substantiation of a causal relationship were assessed. We estimated the population-level association using semi-parametric methods: g-computation, inverse probability of treatment weighting, and targeted maximum likelihood estimation. RESULTS: We identified 2 plausible causal paths from COC use to CIN2+: via HPV infection and via increased disease progression. Study data enabled estimation of the latter only with strong assumptions of no unmeasured confounding. Of 2,519 women under 50 screened per protocol, 219 (8.7%) were diagnosed with CIN2+. Marginal modeling suggested a 2.9% (95% confidence interval 0.1%, 6.9%) increase in prevalence of CIN2+ if all women under 50 were exposed to COC; the significance of this association was sensitive to method of estimation and exposure misclassification. CONCLUSION: Use of causal modeling enabled clear representation of the causal relationship of interest and the assumptions required to estimate that relationship from the observed data. Semi-parametric estimation methods provided flexibility and reduced reliance on correct model form. 
Although selected results suggest an increased prevalence of CIN2+ associated with COC, evidence is insufficient to conclude causality. Priority areas for future studies to better satisfy causal criteria are identified.
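Of the three semi-parametric estimators named above, g-computation is the simplest to sketch: fit an outcome model, then contrast predicted outcomes with exposure set to 1 versus 0 for everyone. Below is a toy version with a linear probability model and simulated data; it is not the study's model, covariate set, or data.

```python
import numpy as np

def g_computation(A, W, Y):
    """Marginal risk difference via g-computation with a linear
    probability model E[Y | A, W] = b0 + b1*A + b2*W."""
    X = np.column_stack([np.ones_like(A), A, W])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    X1 = np.column_stack([np.ones_like(A), np.ones_like(A), W])   # set A=1
    X0 = np.column_stack([np.ones_like(A), np.zeros_like(A), W])  # set A=0
    return np.mean(X1 @ coef) - np.mean(X0 @ coef)

# Simulated cohort: confounder W drives both exposure A and outcome Y;
# the true additive exposure effect is 0.03.
rng = np.random.default_rng(0)
n = 50_000
W = rng.binomial(1, 0.4, n).astype(float)
A = rng.binomial(1, 0.2 + 0.3 * W).astype(float)
Y = rng.binomial(1, 0.05 + 0.03 * A + 0.10 * W).astype(float)
effect = g_computation(A, W, Y)  # close to 0.03 despite confounding
```

Standardizing predictions over the observed covariate distribution is what makes the estimate marginal, mirroring the "prevalence of CIN2+ if all women were exposed" contrast in the abstract.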
Abstract:
A RET network consists of a network of photo-active molecules called chromophores that can participate in inter-molecular energy transfer called resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light-harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called the closed-diffusive exciton valve (C-DEV), in which the input-to-output transfer function is controlled by an external energy source, similar to a semiconductor transistor such as the MOSFET. Due to their biocompatibility, molecular devices like the C-DEV can be used to introduce computing power in biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics of RET devices is stochastic in nature, making them suitable for stochastic computing, in which true random distribution generation is critical.
In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications.
We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the distribution of dipoles in a given sample of chromophore constructs. The model can be used to evaluate the feasibility of other potential orientation control techniques.
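Dipole orientation matters here because RET efficiency depends on the standard Förster orientation factor κ² between donor and acceptor transition dipoles. The sketch below evaluates the textbook expression; it is background for the orientation-control study, not the dissertation's distribution model.

```python
import numpy as np

def kappa_squared(mu_d, mu_a, r_vec):
    """Forster orientation factor
    kappa^2 = (mu_d . mu_a - 3 (mu_d . r)(mu_a . r))^2
    for donor/acceptor transition dipoles and separation vector r
    (all normalized internally); kappa^2 ranges from 0 to 4."""
    mu_d = mu_d / np.linalg.norm(mu_d)
    mu_a = mu_a / np.linalg.norm(mu_a)
    r = r_vec / np.linalg.norm(r_vec)
    kappa = mu_d @ mu_a - 3.0 * (mu_d @ r) * (mu_a @ r)
    return kappa ** 2

r = np.array([1.0, 0.0, 0.0])
# Parallel dipoles perpendicular to the separation axis: kappa^2 = 1.
parallel = kappa_squared(np.array([0.0, 1.0, 0.0]),
                         np.array([0.0, 1.0, 0.0]), r)
# Head-to-tail (collinear) dipoles: kappa^2 = 4, the maximum coupling.
collinear = kappa_squared(np.array([1.0, 0.0, 0.0]),
                          np.array([1.0, 0.0, 0.0]), r)
```

Averaging κ² over a measured dipole distribution is exactly the kind of quantity the mathematical model mentioned above would feed into transfer-rate predictions.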
Abstract:
Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation.
Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope.
In this work, we propose two methods for improving the efficiency of free energy calculations.
First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths.
We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost.
Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers.
We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths, while retaining beneficial characteristics of CFT.
Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path.
Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
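As background for the estimators discussed above, the simplest free energy estimator is exponential averaging (the Zwanzig relation), shown below in reduced (kT = 1) units with synthetic Gaussian work values. This is not pCrooks itself, just the baseline such estimators improve on; the Gaussian case is useful because its free energy difference is known analytically.

```python
import numpy as np

def exp_estimator(delta_u):
    """Zwanzig / EXP estimate of a free energy difference (kT = 1):
    dF = -ln < exp(-dU) >, computed with a shift by min(dU)
    to avoid overflow in the exponentials."""
    m = delta_u.min()
    return m - np.log(np.mean(np.exp(-(delta_u - m))))

# For Gaussian-distributed dU ~ N(mu, sigma^2), the analytic result is
# dF = mu - sigma^2 / 2 (in kT units), a handy correctness check.
rng = np.random.default_rng(1)
samples = rng.normal(2.0, 1.0, 200_000)
dF = exp_estimator(samples)  # should approach 2.0 - 0.5 = 1.5
```

The variance of this estimator grows rapidly with the overlap mismatch between states, which is precisely the problem that lower-variance paths and pairwise decompositions like pCrooks target.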
Abstract:
Critical bed shear stress for incipient motion has been determined for biogenic free-living coralline algae known as maërl. Maërl from three different sedimentary environments (beach, intertidal, and open marine) in Galway Bay, on the west coast of Ireland, was analysed in a rotating annular flume and a linear flume. Velocity profile measurements of the benthic boundary layer, using an Acoustic Doppler Velocimeter, were obtained in four different velocity experiments. The bed shear stress has been determined using three methods: Law of the Wall, Turbulent Kinetic Energy, and Reynolds Stress. The critical Shields parameter has been estimated as a non-dimensional mobility number and the results have been compared with the Shields curve for natural sand. Maërl particles fall below this curve because their greater angularity allows grains to be mobilised more easily than hydraulically equivalent particles. From previous work, the relationship between grain shape and the settling velocity of maërl suggests that roughness is greatest for intertidal maërl particles. During critical shear stress determinations, beds of such rough particles exhibited the greatest critical shear stress, probably because the particle thalli interlocked and resisted entrainment. The Turbulent Kinetic Energy method gives the most consistent results, in agreement with previous comparative studies. Rarely documented maërl megaripples were observed in the rotating annular flume and are hypothesised to form at velocities ~10 cm s-1 higher than the critical threshold velocity, where tidal currents, oscillatory flow, or combined wave-current interaction results in the preferential transport of maërl. Determining the critical bed shear stress of maërl allows its mobility and rates of erosion and deposition to be evaluated spatially in subsequent applications to biological conservation management.
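Two of the three shear-stress methods named above reduce to short calculations. The sketch below fits the Law of the Wall to a synthetic velocity profile and applies the TKE method with a proportionality constant commonly used in the literature; the density, constant, and profile values are illustrative, not the flume data.

```python
import numpy as np

RHO = 1025.0   # seawater density, kg/m^3 (assumed)
KAPPA = 0.41   # von Karman constant
C1 = 0.19      # TKE proportionality constant commonly cited in the literature

def law_of_wall_stress(z, u):
    """Fit u(z) = (u*/kappa) * ln(z/z0); the slope of u versus ln(z)
    gives u*/kappa, and bed shear stress is tau = rho * u*^2."""
    slope, _ = np.polyfit(np.log(z), u, 1)
    u_star = slope * KAPPA
    return RHO * u_star ** 2

def tke_stress(u_var, v_var, w_var):
    """TKE method: tau = C1 * rho * 0.5 * (u'^2 + v'^2 + w'^2),
    from the velocity variances measured by the ADV."""
    return C1 * RHO * 0.5 * (u_var + v_var + w_var)

# Synthetic logarithmic profile with u* = 0.02 m/s and z0 = 1 mm.
z = np.array([0.01, 0.02, 0.05, 0.10, 0.20])   # heights above bed, m
u = (0.02 / KAPPA) * np.log(z / 0.001)         # velocities, m/s
tau_low = law_of_wall_stress(z, u)             # recovers rho * u*^2 = 0.41 Pa
```

On clean log-layer data both routes agree; the abstract's finding that the TKE method is most consistent reflects its insensitivity to the choice of fitting heights that the log-law fit requires.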