897 results for lifetime of isomer


Relevance:

80.00%

Publisher:

Abstract:

This paper presents a strategy to predict the lifetime of rails subjected to large rolling contact loads that induce ratchetting strains in the rail head. A critical element concept is used to calculate the number of loading cycles needed for crack initiation to occur in the rail head surface. In this technique the finite element method (FEM) is used to determine the maximum equivalent ratchetting strain per load cycle, which is calculated by combining longitudinal and shear strains in the critical element. This technique builds on a previously developed critical plane concept that has been used to calculate the number of cycles to crack initiation in rolling contact fatigue under ratchetting failure conditions. The critical element concept simplifies the analytical difficulties of critical plane analysis. Finite element analysis (FEA) is used to identify the critical element in the mesh, and the strain values of the critical element are then used to calculate the ratchetting rate analytically. Finally, a ratchetting criterion is used to calculate the number of cycles to crack initiation from the calculated ratchetting rate.
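The closing step of the strategy, converting a per-cycle ratchetting rate into a number of cycles to crack initiation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the von Mises-style strain combination, the critical strain value, and the per-cycle strain increments are all assumed for the example.

```python
import math

def equivalent_ratchetting_rate(d_eps_long, d_gamma_shear):
    """Combine longitudinal and shear strain increments per load cycle
    into an equivalent ratchetting strain rate (a von Mises-style
    combination, chosen here only for illustration)."""
    return math.sqrt(d_eps_long ** 2 + (d_gamma_shear / math.sqrt(3)) ** 2)

def cycles_to_crack_initiation(rate_per_cycle, critical_strain):
    """Ratchetting criterion: crack initiation occurs when the accumulated
    equivalent strain reaches the material's critical strain."""
    return critical_strain / rate_per_cycle

# Hypothetical per-cycle increments for the critical element found by FEA
rate = equivalent_ratchetting_rate(d_eps_long=2e-5, d_gamma_shear=5e-5)
print(f"rate = {rate:.2e}/cycle, N_i = {cycles_to_crack_initiation(rate, 0.11):.0f} cycles")
```

In the paper's workflow, the two strain increments would come from the FEA results for the critical element rather than being chosen by hand.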

Relevance:

80.00%

Publisher:

Abstract:

The aim of this on-going research is to interrogate the era of colonialism in Australia (1896-1966) and the denial of paid employment to Aboriginal women. Under the Aboriginals Protection and Restriction of the Sale of Opium Act 1897, thousands of Aboriginal people were placed on Government-run reserves and missions, with the result that all aspects of their lives were controlled through state mechanisms. Under various Acts of Parliament, Aboriginal women were sent to privately owned properties to be used as 'domestic servants' through a system of forced indentured labour, which continued until the 1970s. This paper discusses the hidden histories of these women through the use of primary source documents, including records from the Australian Department of Native Affairs and the Department of Home and Health. This social history research reveals that removing Aboriginal women from their families at the age of 12 or 13 and placing them with white families was more common practice than not. These women were often not paid, worked up to 15-hour days, were not allowed leave, and were subjected to many forms of abuse. Wages that were meant to be paid were redirected to others, including the Government. Whilst the retrieval of these 'stolen wages' is now an on-going issue, with the Queensland Government in 2002 offering AUS $2,000 to $4,000 in compensation for a lifetime of work, Aboriginal women were also asked to waive their legal right to further compensation. There are few documented histories of these Aboriginal women as told through the archives. This hidden Aboriginal Australian women's history needs to be revealed to better understand the experiences and depth of misappropriation of Aboriginal women as domestic workers. In doing so, it also reveals a more accurate reflection of women's work in Australia.

Relevance:

80.00%

Publisher:

Abstract:

Nanosecond dynamics of two separated discharge cycles in an asymmetric dielectric barrier discharge are studied using time-resolved current and voltage measurements synchronized with high-speed (∼5 ns) optical imaging. Nanosecond dc pulses with tailored rise and fall times are used to generate solitary filamentary structures (SFSs) during the first cycle and a uniform glow during the second. The SFSs feature ∼1.5 mm thickness, ∼1.9 A peak current, and a lifetime of several hundred nanoseconds, at least an order of magnitude longer than in common microdischarges. This can be used to alternate localized and uniform high-current plasma treatments in various applications.

Relevance:

80.00%

Publisher:

Abstract:

We investigate the photoexcited state dynamics in a donor-acceptor copolymer, poly{3,6-dithiophene-2-yl-2,5-di(2-octyldodecyl)-pyrrolo[3,4-c]pyrrole-1,4-dione-alt-naphthalene} (pDPP-TNT), by picosecond fluorescence and femtosecond transient absorption spectroscopies. Time-resolved fluorescence lifetime measurements of pDPP-TNT thin films reveal that the lifetime of the singlet excited state is 185 ± 5 ps and that singlet-singlet annihilation occurs at excitation photon densities above 6 × 10¹⁷ photons/cm³. From the results of the singlet-singlet annihilation analysis, we estimate that the singlet-singlet annihilation rate constant is (6.0 ± 0.2) × 10⁻⁹ cm³ s⁻¹ and the singlet diffusion length is ∼7 nm. From the comparison of femtosecond transient absorption measurements and picosecond fluorescence measurements, it is found that the time profile of the photobleaching signal in the charge-transfer (CT) absorption band coincides with that of the fluorescence intensity and there is no indication of long-lived species, which clearly suggests that species such as polaron pairs and triplet excitons are not effectively photogenerated in the neat pDPP-TNT polymer.
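The reported numbers can be tied together with a short kinetic sketch of singlet decay with annihilation. The rate equation's 1/2 convention and the forward-Euler integration are illustrative choices; the rate constant is taken as 6.0 × 10⁻⁹ cm³ s⁻¹, which is the physically sensible reading of the abstract's garbled exponent.

```python
# Singlet population decay with singlet-singlet annihilation:
#   dn/dt = -n/tau - (1/2) * gamma * n**2
# tau = 185 ps as measured; gamma = 6.0e-9 cm^3/s (assumed sign of the
# exponent); the 1/2 convention and the Euler step are illustrative.
tau = 185e-12      # singlet lifetime, s
gamma = 6.0e-9     # annihilation rate constant, cm^3/s

def surviving_fraction(n0, t_end=1e-9, dt=1e-14):
    """Forward-Euler integration of the decay; returns n(t_end)/n0."""
    n, t = n0, 0.0
    while t < t_end:
        n += dt * (-n / tau - 0.5 * gamma * n * n)
        t += dt
    return n / n0

# Below the reported threshold (6e17 photons/cm^3) the decay is nearly
# monomolecular; well above it, annihilation removes an extra fraction.
print(surviving_fraction(1e16), surviving_fraction(1e18))
```

Running this shows the density-dependent decay that underlies the annihilation analysis: at 10¹⁶ cm⁻³ the surviving fraction is close to the simple exp(-t/τ) value, while at 10¹⁸ cm⁻³ it is noticeably smaller.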

Relevance:

80.00%

Publisher:

Abstract:

Wireless ad-hoc networks transmit information from a source to a destination via multiple hops in order to save energy and, thus, increase the lifetime of battery-operated nodes. The energy savings can be especially significant in cooperative transmission schemes, where several nodes cooperate during one hop to forward the information to the next node along a route to the destination. Finding the best multi-hop transmission policy in such a network, which determines the nodes involved in each hop, is a very important problem, but also a very difficult one, especially when the physical wireless channel behavior is to be accounted for and exploited. We model the above optimization problem for randomly fading channels as a decentralized control problem: the channel observations available at each node define the information structure, while the control policy is defined by the power and phase of the signal transmitted by each node. In particular, we consider the problem of computing an energy-optimal cooperative transmission scheme in a wireless network for two different channel fading models: (i) slow fading channels, where the channel gains of the links remain the same for a large number of transmissions, and (ii) fast fading channels, where the channel gains of the links change quickly from one transmission to another. For slow fading, we consider a factored class of policies (corresponding to local cooperation between nodes), and show that the computation of an optimal policy in this class is equivalent to a shortest path computation on an induced graph, whose edge costs can be computed in a decentralized manner using only locally available channel state information (CSI). For fast fading, both CSI acquisition and data transmission consume energy; hence, we need to jointly optimize over both. We cast this optimization problem as a large stochastic optimization problem, and then jointly optimize over a set of CSI functions of the local channel states and a corresponding factored class of control policies.
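For the slow-fading case, the reduction to a shortest-path computation can be illustrated with a standard Dijkstra sketch, where each edge cost stands for the energy of one (possibly cooperative) hop, computable from locally available CSI. The network topology and energy values below are hypothetical.

```python
import heapq

def min_energy_route(edge_energy, source, dest):
    """Dijkstra on the induced graph: each edge cost is the transmission
    energy of one (possibly cooperative) hop. edge_energy maps
    node -> {neighbour: energy}; returns (total energy, route)."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, e in edge_energy.get(u, {}).items():
            nd = d + e
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    route, node = [dest], dest
    while node != source:
        node = prev[node]
        route.append(node)
    return dist[dest], route[::-1]

# Hypothetical 4-node network: relaying A -> B -> D beats the direct link
links = {"A": {"B": 1.0, "D": 5.0}, "B": {"C": 1.0, "D": 2.0}, "C": {"D": 1.0}}
print(min_energy_route(links, "A", "D"))  # (3.0, ['A', 'B', 'D'])
```

The decentralized aspect described in the abstract enters through the edge costs: each node can evaluate the energies of its own outgoing edges from local CSI before the shortest-path computation is run.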

Relevance:

80.00%

Publisher:

Abstract:

In its October 2003 report on the definition of disability used by the Social Security Administration’s (SSA’s) disability programs [i.e., Social Security Disability Insurance (SSDI) and Supplemental Security Income (SSI) for people with disabilities], the Social Security Advisory Board raises the issue of whether this definition is at odds with the concept of disability embodied in the Americans with Disabilities Act (ADA) and, more importantly, with the aspirations of people with disabilities to be full participants in mainstream social activities and lead fulfilling, productive lives. The Board declares that “the Nation must face up to the contradictions created by the existing definition of disability.” I wholeheartedly agree. Further, I have concluded that we have to make fundamental, conceptual changes to both how we define eligibility for economic security benefits, and how we provide those benefits, if we are ever to fulfill the promise of the ADA. To convince you of that proposition, I will begin by relating a number of facts that paint a very bleak picture – a picture of deterioration in the economic security of the population that the disability programs are intended to serve; a picture of programs that purport to provide economic security, but are themselves financially insecure and subject to cycles of expansion and cuts that undermine their purpose; a picture of programs that are facing their biggest expenditure crisis ever; and a picture of an eligibility determination process that is inefficient and inequitable -- one that rations benefits by imposing high application costs on applicants in an arbitrary fashion. 
I will then argue that the fundamental reason for this bleak picture is the conceptual definition of eligibility that these programs use – one rooted in a disability paradigm that social scientists, people with disabilities, and, to a substantial extent, the public have rejected as being flawed, most emphatically through the passage of the ADA. Current law requires eligibility rules to be based on the premise that disability is medically determinable. That’s wrong because, as the ADA recognizes, a person’s environment matters. I will further argue that programs relying on this eligibility definition must inevitably: reward people if they do not try to help themselves, but not if they do; push the people they serve out of society’s mainstream, fostering a culture of isolation and dependency; relegate many to a lifetime of poverty; and undermine their promise of economic security because of the periodic “reforms” that are necessary to maintain taxpayer support. I conclude by pointing out that to change the conceptual definition for program eligibility, we also must change our whole approach to providing for the economic security of people with disabilities. We need to replace our current “caretaker” approach with one that emphasizes helping people with disabilities help themselves. I will briefly describe features that such a program might require, and point out the most significant challenges we would face in making the transition.

Relevance:

80.00%

Publisher:

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most out of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate.
This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
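The connection of task (iii) to dominating sets can be illustrated with the classical greedy approximation; this is a centralised sketch for intuition, not one of the thesis's constant-time local algorithms, and the example graph is hypothetical.

```python
def greedy_dominating_set(adj):
    """Classical greedy approximation for minimum dominating set:
    repeatedly pick the node that dominates the most not-yet-covered
    nodes. adj maps node -> set of neighbours (e.g. sensing range)."""
    uncovered = set(adj)
    chosen = set()
    while uncovered:
        # a node dominates itself and its neighbours
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        chosen.add(best)
        uncovered -= {best} | adj[best]
    return chosen

# Path graph 0-1-2-3-4: sensors at nodes 1 and 3 dominate every node
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_dominating_set(path))  # {1, 3}
```

As the abstract notes, a dominating set only guarantees that an intruder is *detected* somewhere; distinguishing *where* the intruder is requires the stronger identifying-code or locating–dominating-code formulations.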

Relevance:

80.00%

Publisher:

Abstract:

With the proliferation of wireless and mobile devices equipped with multiple radio interfaces to connect to the Internet, vertical handoff involving different wireless access technologies will enable users to get the best connectivity and service quality during the lifetime of a TCP connection. A vertical handoff may introduce an abrupt, significant change in the access link characteristics, and as a result the end-to-end path characteristics, such as the bandwidth and the round-trip time (RTT) of a TCP connection, may change considerably. TCP may take several RTTs to adapt to these changes in path characteristics, and during this interval there may be packet losses and/or inefficient utilization of the available bandwidth. In this thesis we study the behaviour and performance of TCP in the presence of a vertical handoff. We identify the different handoff scenarios that adversely affect TCP performance. We propose several enhancements to the TCP sender algorithm, specific to the different handoff scenarios, to adapt TCP better to a vertical handoff. Our algorithms are conservative in nature and make use of cross-layer information obtained from the lower layers regarding the characteristics of the access links involved in a handoff. We evaluate the proposed algorithms by extensive simulation of the various handoff scenarios involving access links with a wide range of bandwidths and delays. We show that the proposed algorithms are effective in improving TCP behaviour in various handoff scenarios and do not adversely affect the performance of TCP in the absence of cross-layer information.
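One way such a cross-layer enhancement might look is sketched below: on a handoff notification, the sender conservatively re-seeds its congestion-control state from the new link's bandwidth-delay product instead of waiting several RTTs for normal congestion control to adapt. This is an illustrative policy consistent with the description, not the thesis's exact algorithms; all parameter values are hypothetical.

```python
def adapt_after_handoff(cwnd, mss, new_bw_bps, new_rtt_s):
    """On a vertical-handoff notification from the lower layers, re-seed
    (cwnd, ssthresh) in segments from the new access link's
    bandwidth-delay product (BDP). Conservative illustrative policy:
    cwnd may only shrink on a handoff, never grow."""
    bdp_segments = max(1, int(new_bw_bps * new_rtt_s / (8 * mss)))
    new_cwnd = min(cwnd, bdp_segments)   # shrink to fit the new path
    new_ssthresh = bdp_segments          # resume probing up to the BDP
    return new_cwnd, new_ssthresh

# Downward handoff: WLAN (54 Mbit/s, 10 ms RTT) -> cellular
# (1 Mbit/s, 200 ms RTT), 1460-byte MSS
print(adapt_after_handoff(cwnd=80, mss=1460,
                          new_bw_bps=1_000_000, new_rtt_s=0.2))  # (17, 17)
```

On an upward handoff (to a faster link) this policy leaves cwnd unchanged and only raises ssthresh, which matches the conservative flavour the abstract describes.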

Relevance:

80.00%

Publisher:

Abstract:

The continuous production of blood cells, a process termed hematopoiesis, is sustained throughout the lifetime of an individual by a relatively small population of cells known as hematopoietic stem cells (HSCs). HSCs are unique cells characterized by their ability to self-renew and give rise to all types of mature blood cells. Given their high proliferative potential, HSCs need to be tightly regulated on the cellular and molecular levels or could otherwise turn malignant. On the other hand, the tight regulatory control of HSC function also translates into difficulties in culturing and expanding HSCs in vitro. In fact, it is currently not possible to maintain or expand HSCs ex vivo without rapid loss of self-renewal. Increased knowledge of the unique features of important HSC niches and of key transcriptional regulatory programs that govern HSC behavior is thus needed. Additional insight into the mechanisms of stem cell formation could enable us to recapitulate the processes of HSC formation and self-renewal/expansion ex vivo, with the ultimate goal of creating an unlimited supply of HSCs from e.g. human embryonic stem cells (hESCs) or induced pluripotent stem cells (iPS) to be used in therapy. We thus asked: How are hematopoietic stem cells formed and in what cellular niches does this happen (Papers I, II)? What are the molecular mechanisms that govern hematopoietic stem cell development and differentiation (Papers III, IV)? Importantly, we could show that the placenta is a major fetal hematopoietic niche that harbors a large number of HSCs during midgestation (Paper I) (Gekas et al., 2005). In order to address whether the HSCs found in the placenta were formed there, we utilized the Runx1-LacZ knock-in and Ncx1 knockout mouse models (Paper II). Importantly, we could show that HSCs emerge de novo in the placental vasculature in the absence of circulation (Rhodes et al., 2008).
Furthermore, we could identify defined microenvironmental niches within the placenta with distinct roles in hematopoiesis: the large vessels of the chorioallantoic mesenchyme serve as sites of HSC generation whereas the placental labyrinth is a niche supporting HSC expansion (Rhodes et al., 2008). Overall, these studies illustrate the importance of distinct milieus in the emergence and subsequent maturation of HSCs. To ensure proper function of HSCs several regulatory mechanisms are in place. The microenvironment in which HSCs reside provides soluble factors and cell-cell interactions. In the cell nucleus, these cell-extrinsic cues are interpreted in the context of cell-intrinsic developmental programs which are governed by transcription factors. An essential transcription factor for initiation of hematopoiesis is Scl/Tal1 (stem cell leukemia gene/T-cell acute leukemia gene 1). Loss of Scl results in early embryonic death and total lack of all blood cells, yet deactivation of Scl in the adult does not affect HSC function (Mikkola et al., 2003b). In order to define the temporal window of Scl requirement during fetal hematopoietic development, we deactivated Scl in all hematopoietic lineages shortly after hematopoietic specification in the embryo. Interestingly, maturation, expansion and function of fetal HSCs were unaffected, and, as in the adult, red blood cell and platelet differentiation was impaired (Paper III) (Schlaeger et al., 2005). These findings highlight that, once specified, the hematopoietic fate is stable even in the absence of Scl and is maintained through mechanisms that are distinct from those required for the initial fate choice. As the critical downstream targets of Scl remain unknown, we sought to identify and characterize target genes of Scl (Paper IV).
We could identify the transcription factor Mef2C (myocyte enhancer factor 2 C) as a novel direct target gene of Scl specifically in the megakaryocyte lineage, which largely explains the megakaryocyte defect observed in Scl-deficient mice. In addition, we observed an Scl-independent requirement of Mef2C in the B-cell compartment, as loss of Mef2C leads to accelerated B-cell aging (Gekas et al., submitted). Taken together, these studies identify key extracellular microenvironments and intracellular transcriptional regulators that dictate different stages of HSC development, from emergence to lineage choice to aging.

Relevance:

80.00%

Publisher:

Abstract:

We consider a single-hop data-gathering sensor network, consisting of a set of sensor nodes that transmit data periodically to a base-station. We are interested in maximizing the lifetime of this network. With our definition of network lifetime and the assumption that the radio transmission energy consumption forms the most significant portion of the total energy consumption at a sensor node, we attempt to enhance the network lifetime by reducing the transmission energy budget of sensor nodes by exploiting three system-level opportunities. We pose the problem of maximizing lifetime as a max-min optimization problem subject to the constraint of successful data collection and limited energy supply at each node. This turns out to be an extremely difficult optimization to solve. To reduce the complexity of this problem, we allow the sensor nodes and the base-station to interactively communicate with each other and employ instantaneous decoding at the base-station. The chief contribution of the paper is to show that the computational complexity of our problem is determined by the complex interplay of various system-level opportunities and challenges.
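Under a first-node-death definition of lifetime (one of several possible definitions; the abstract does not fix one), the quantity being maximised can be stated in a few lines. The node count and energy figures below are hypothetical.

```python
def network_lifetime(battery, per_round_energy):
    """First-node-death lifetime: with periodic reporting, node i lasts
    battery[i] / per_round_energy[i] rounds, so the network lifetime is
    the minimum over all sensor nodes. Reducing the transmission energy
    budget of the worst-off node is what extends this minimum."""
    return min(b / e for b, e in zip(battery, per_round_energy))

# Hypothetical single-hop network (mJ batteries, mJ-per-round radio cost);
# nodes farther from the base-station pay more transmission energy.
print(network_lifetime([1000.0, 1000.0, 1000.0], [2.0, 5.0, 4.0]))  # 200.0
```

The max-min optimization described in the abstract amounts to choosing the transmission parameters (subject to successful data collection) so that this minimum ratio is as large as possible.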

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we report a systematic study of low-frequency 1∕f^α resistance fluctuations in thin metal films (Ag on Si) at different stages of the damage process when the film is subjected to high current stressing. The resistance fluctuation (noise) measurement was carried out in situ using a small ac bias mixed with the dc stressing current. The experiment was carried out as a function of temperature in the range of 150–350 K. The experiment establishes that the current-stressed film, as it undergoes damage due to various migration forces, develops an additional low-frequency noise spectral power that does not have the usual 1∕f spectral shape. The magnitude of the extra term has an activated temperature dependence (activation energy of ≈0.1 eV) and a 1∕f^1.5 spectral dependence. The activation energy is the same as that seen from the temperature dependence of the lifetime of the film. The extra 1∕f^1.5 spectral power changes the spectral shape of the noise power as the damage process progresses. The extra term, likely arising from diffusion, starts in the early stage of the migration process during current stressing and is noticeable well before any change can be detected in simultaneous resistance measurements. The experiments carried out over a large temperature range establish a strong correlation between the evolution of the migration process in a current-stressed film and the low-frequency noise component that is not 1∕f noise.
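The two-component spectral model implied by these measurements can be written down directly: a temperature-independent 1∕f term plus a thermally activated 1∕f^1.5 term with Ea ≈ 0.1 eV. The prefactors below are illustrative, not fitted values from the paper.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def noise_power(f, T, A=1e-12, B0=1e-6, Ea=0.1):
    """Model spectral power of the current-stressed film: the usual 1/f
    term plus a damage-related 1/f^1.5 term whose magnitude follows an
    Arrhenius law with Ea ~ 0.1 eV (A and B0 are illustrative)."""
    B = B0 * math.exp(-Ea / (K_B * T))
    return A / f + B / f ** 1.5

# The 1/f^1.5 component grows strongly between 150 K and 350 K,
# changing the overall spectral shape as damage progresses.
for T in (150, 250, 350):
    print(T, noise_power(1.0, T))
```

A fit of measured spectra to this form would recover Ea from the temperature dependence of B, which is how the abstract's comparison with the film-lifetime activation energy would be made.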

Relevance:

80.00%

Publisher:

Abstract:

We consider the problem of quickest detection of an intrusion using a sensor network, keeping only a minimal number of sensors active. By using a minimal number of sensor devices, we ensure that the energy expenditure for sensing, computation and communication is minimized (and the lifetime of the network is maximized). We model the intrusion detection (or change detection) problem as a Markov decision process (MDP). Based on the theory of MDP, we develop the following closed loop sleep/wake scheduling algorithms: (1) optimal control of Mk+1, the number of sensors in the wake state in time slot k + 1, (2) optimal control of qk+1, the probability of a sensor in the wake state in time slot k + 1, and an open loop sleep/wake scheduling algorithm which (3) computes q, the optimal probability of a sensor in the wake state (which does not vary with time), based on the sensor observations obtained until time slot k. Our results show that an optimum closed loop control on Mk+1 significantly decreases the cost compared to keeping any number of sensors active all the time. Also, among the three algorithms described, we observe that the total cost is minimum for the optimum control on Mk+1 and is maximum for the optimum open loop control on q.
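The trade-off behind the control of Mk+1 can be caricatured with a one-step cost minimisation: waking more sensors costs energy but lowers the chance of missing the change. This is a deliberately simplified stand-in for the MDP formulation, not the paper's algorithm; the costs and the per-sensor detection probability are hypothetical.

```python
def best_wake_count(N, c_sense, c_delay, p_detect_one=0.2):
    """One-step caricature of the closed-loop control of M_(k+1): waking
    M of the N sensors costs M * c_sense per slot, while the chance of
    missing a change (and paying c_delay) shrinks as (1 - p)**M.
    Return the M that minimises the expected per-slot cost."""
    def cost(M):
        return M * c_sense + (1 - p_detect_one) ** M * c_delay
    return min(range(N + 1), key=cost)

# With these hypothetical costs the optimum wakes 6 of the 10 sensors,
# cheaper than keeping all sensors active all the time.
print(best_wake_count(N=10, c_sense=3.0, c_delay=50.0))  # 6
```

The abstract's result, that closed-loop control of Mk+1 beats keeping any fixed number of sensors awake, corresponds in this caricature to re-solving the minimisation each slot as the posterior probability of a change evolves.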

Relevance:

80.00%

Publisher:

Abstract:

Controlled nuclear fusion is one of the most promising sources of energy for the future. Before this goal can be achieved, one must be able to control the enormous energy densities which are present in the core plasma in a fusion reactor. In order to be able to predict the evolution, and thereby the lifetime, of different plasma-facing materials under reactor-relevant conditions, the interaction of atoms and molecules with plasma first-wall surfaces has to be studied in detail. In this thesis, the fundamental sticking and erosion processes of carbon-based materials, the nature of hydrocarbon species released from plasma-facing surfaces, and the evolution of the components under cumulative bombardment by atoms and molecules have been investigated by means of molecular dynamics simulations using both analytic potentials and a semi-empirical tight-binding method. The sticking cross-section of CH3 radicals at unsaturated carbon sites at diamond (111) surfaces is observed to decrease with increasing angle of incidence, a dependence which can be described by a simple geometrical model. The simulations furthermore show the sticking cross-section of CH3 radicals to be strongly dependent on the local neighborhood of the unsaturated carbon site. The erosion of amorphous hydrogenated carbon surfaces by helium, neon, and argon ions in combination with hydrogen at energies ranging from 2 to 10 eV is studied using both non-cumulative and cumulative bombardment simulations. The results show no significant differences between sputtering yields obtained from bombardment simulations with different noble gas ions. The final simulation cells from the 5 and 10 eV ion bombardment simulations, however, show marked differences in surface morphology. In further simulations the behavior of amorphous hydrogenated carbon surfaces under bombardment with D⁺, D₂⁺, and D₃⁺ ions in the energy range from 2 to 30 eV has been investigated.
The total chemical sputtering yields indicate that molecular projectiles lead to larger sputtering yields than atomic projectiles. Finally, the effect of hydrogen ion bombardment of both crystalline and amorphous tungsten carbide surfaces is studied. Prolonged bombardment is found to lead to the formation of an amorphous tungsten carbide layer, regardless of the initial structure of the sample. In agreement with experiment, preferential sputtering of carbon is observed in both the cumulative and non-cumulative simulations.
