426 results for Low.
Abstract:
In this thesis, various schemes using custom power devices for power quality improvement in a low-voltage distribution network are studied. Customer-operated distributed generators make a typical network non-radial and affect power quality. A scheme considering different DSTATCOM algorithms is proposed for power circulation and islanded operation of the system. To compensate for reactive power overflow and facilitate unity power factor, a UPQC is introduced. Stochastic analysis is carried out for different scenarios to give a comprehensive picture of a real-life distribution network. Combined operation of a static compensator and a voltage regulator is tested for optimum quality and stability of the system.
Abstract:
Post-traumatic stress disorder (PTSD) is a serious medical condition affecting both military and civilian populations. While its etiology remains poorly understood, it is characterized by high and prolonged levels of fear responding. One biological unknown is whether individuals expressing high or low conditioned fear encode the memory differently and whether that difference underlies the fear response. In this study we examined cellular mechanisms that underlie high and low conditioned fear behavior using an advanced intercrossed mouse line (B6D2F1) selected for high and low Pavlovian fear response. A known requirement for consolidation of fear memory, phosphorylated mitogen-activated protein kinase (p44/42 (ERK) MAPK (pMAPK)) in the lateral amygdala (LA), is a reliable marker of fear learning-related plasticity. In this study, we asked whether high and low conditioned fear behavior is associated with differential pMAPK expression in the LA and, if so, whether it is due to an increase in neurons expressing pMAPK or to increased pMAPK per neuron. To examine this, we quantified pMAPK-expressing neurons in the LA at baseline and following Pavlovian fear conditioning. Results indicate that high-fear-phenotype mice have more pMAPK-expressing neurons in the LA. This finding suggests that increased endogenous plasticity in the LA may be a component of higher conditioned fear responses and begins to explain at the cellular level how different fear responders encode fear memories. Understanding how high and low fear responders encode fear memory will help identify novel ways in which fear-related illness risk can be better predicted and treated.
Abstract:
The 510-million-year-old Kalkarindji Large Igneous Province correlates in time with the first major extinction event after the Cambrian explosion of life. Large igneous provinces correlate with all major mass extinction events in the last 500 million years, yet the genetic link between large igneous provinces and mass extinctions remains unclear. My work is a contribution towards understanding the magmatic processes involved in the generation of large igneous provinces. I concentrate on the origin of variation in Cr in magmas and have developed a model in which high-temperature melts intrude into and assimilate large amounts of upper continental crust.
Abstract:
Ever-growing populations in cities are associated with a major increase in road vehicles and air pollution. The overall high levels of urban air pollution have been shown to pose a significant risk to city dwellers. However, the impacts of very high but temporally and spatially restricted pollution, and thus exposure, are still poorly understood. Conventional approaches to air quality monitoring are based on networks of static and sparse measurement stations. However, these are prohibitively expensive for capturing the spatio-temporal heterogeneity and identifying the pollution hotspots required for the development of robust real-time strategies for exposure control. Current progress in developing low-cost micro-scale sensing technology is radically changing the conventional approach by allowing real-time information in a capillary form. But the question remains whether there is value in the less accurate data they generate. This article illustrates the drivers behind current rises in the use of low-cost sensors for air pollution management in cities, whilst addressing the major challenges for their effective implementation.
Abstract:
Background The purpose of this study was the development of a valid and reliable “Mechanical and Inflammatory Low Back Pain Index” (MIL) for the assessment of non-specific low back pain (NSLBP). This 7-item tool assists practitioners in determining whether symptoms are predominantly mechanical or inflammatory. Methods Participants (n = 170, 96 females, age = 38 ± 14 years) with NSLBP were referred to two Spanish physiotherapy clinics and completed the MIL and the following measures: the Roland Morris Questionnaire (RMQ), SF-12, and the “Backache Index” (BAI) physical assessment test. For test-retest reliability, 37 consecutive patients were assessed at baseline and three days later during a non-treatment period. Face and content validity, practical characteristics, factor analysis, internal consistency, discriminant validity, and convergent validity were assessed from the full sample. Results A total of 27 potential items identified for inclusion were subsequently reduced to 11 by an expert panel. Four items were then removed due to cross-loading under confirmatory factor analysis, where a two-factor model yielded a good fit to the data (χ2 = 14.80, df = 13, p = 0.37, CFI = 0.98, and RMSEA = 0.029). Internal consistency was moderate (α = 0.68 for MLBP; 0.72 for ILBP), test-retest reliability was high (ICC = 0.91; 95%CI = 0.88-0.93), and discriminant validity was good for both MLBP (AUC = 0.74) and ILBP (AUC = 0.92). Convergent validity was demonstrated through similar but weak correlations between the ILBP and both the RMQ and BAI (r = 0.34, p < 0.001) and between the MLBP and BAI (r = 0.38, p < 0.001). Conclusions The MIL is a valid and reliable clinical tool for patients with NSLBP that discriminates between mechanical and inflammatory LBP. Keywords: Low back pain; Psychometric properties; Pain measurement; Screening tool; Inflammatory; Mechanical
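As a quick sanity check on the fit statistics reported above, the RMSEA can be recomputed from χ², df, and the sample size with one common formula, RMSEA = sqrt(max(χ² − df, 0) / (df · (n − 1))). The sketch below assumes the full sample of n = 170 was used for the confirmatory factor analysis:

```python
import math

# Fit statistics reported in the abstract
chi2, df, n = 14.80, 13, 170

# One common RMSEA formula (some software divides by n rather than
# n - 1; both round to the same value here)
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea, 3))  # matches the reported RMSEA = 0.029
```

The recomputed value agrees with the reported 0.029, so the fit indices are internally consistent.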
Abstract:
So far, low-probability differentials for the key schedule of block ciphers have been used as a straightforward proof of security against related-key differential analysis. To achieve resistance, it is believed that for a cipher with a k-bit key it suffices for the upper bound on the probability to be 2^-k. Surprisingly, we show that this reasonable assumption is incorrect, and the probability should be (much) lower than 2^-k. Our counterexample is a related-key differential analysis of the well-established block cipher CLEFIA-128. We show that although the key schedule of CLEFIA-128 prevents differentials with a probability higher than 2^-128, the linear part of the key schedule that produces the round keys, and the Feistel structure of the cipher, allow particularly chosen differentials with a probability as low as 2^-128 to be exploited. CLEFIA-128 has 2^14 such differentials, which translate to 2^14 pairs of weak keys. The probability of each differential is too low on its own, but the weak keys have a special structure which allows a divide-and-conquer approach to gain an advantage of 2^7 over generic analysis. We exploit the advantage to give a membership test for the weak-key class and provide analysis of the hashing modes. The proposed analysis has been tested with computer experiments on small-scale variants of CLEFIA-128. Our results do not threaten the practical use of CLEFIA.
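To keep the orders of magnitude straight, the quantities quoted above can be tabulated in a back-of-the-envelope sketch. Only the abstract's numbers are used; the differential structure of CLEFIA-128 itself is not modelled:

```python
from fractions import Fraction

KEY_BITS = 128

# Probability of a single chosen related-key differential: 2^-128
p_differential = Fraction(1, 2**KEY_BITS)

# Number of such differentials, each giving a pair of weak keys: 2^14
n_differentials = 2**14

# Combined probability mass of the exploitable differentials: 2^-114
total_mass = n_differentials * p_differential
assert total_mass == Fraction(1, 2**114)

# The divide-and-conquer structure of the weak keys yields an advantage
# factor of 2^7 over generic analysis
advantage = 2**7
print(advantage)  # 128
```

The point of the attack is exactly this gap: each individual differential is useless at 2^-128, but the structured weak-key class admits a 2^7-fold speed-up over a generic approach.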
Abstract:
We consider online trading in a single security with the objective of getting rich whenever its price exhibits a large upcrossing, without risking bankruptcy. We investigate payoff guarantees that are expressed in terms of the extremity of the upcrossings. We obtain an exact and elegant characterisation of the guarantees that can be achieved. Moreover, we derive a simple canonical strategy for each attainable guarantee.
Abstract:
In a study of socioeconomically disadvantaged children's acquisition of school literacies, a university research team investigated how a group of teachers negotiated critical literacies and explored notions of social power with elementary children in a suburban school located in an area of high poverty. Here we focus on a grade 2/3 classroom where the teacher and children became involved in a local urban renewal project and on how in the process the children wrote about place and power. Using the students' concerns about their neighborhood, the teacher engaged her class in a critical literacy project that not only involved a complex set of literate practices but also taught the children about power and the possibilities for local civic action. In particular, we discuss examples of children's drawing and writing about their neighborhoods and their lives. We explore how children's writing and drawing might be key elements in developing "critical literacies" in elementary school settings. We consider how such classroom writing can be a mediator of emotions, intellectual and academic learning, social practice, and political activism.
Abstract:
This research explores how community participation can address the affordable housing problems of the poor in Dhaka. Based on extensive interviews, community focus groups, and household surveys in different Dhaka slums, it identifies the factors limiting community participation in affordable housing creation. In Dhaka, housing options for the poor are currently limited to affordable shelters in informal settlements. Public housing programs have failed to reach the poor and meet affordability levels due to a number of factors, including lack of beneficiary participation. Beneficiary participation, though widely recognized as essential for success in housing initiatives, often deteriorates in the process of implementation into mere involvement, not reflecting community needs and aspirations and thus failing to meet its core objectives. This research identified the most significant impediments, as well as the opportunities, for advancing the poor's participation in their own housing provision in Dhaka city.
Abstract:
Two studies documented the “David and Goliath” rule—the tendency for people to perceive criticism of “David” groups (groups with low power and status) as less normatively permissible than criticism of “Goliath” groups (groups with high power and status). The authors confirmed the existence of the David and Goliath rule across Western and Chinese cultures (Study 1). However, the rule was endorsed more strongly in Western than in Chinese cultures, an effect mediated by cultural differences in power distance. Study 2 identified the psychological underpinnings of this rule in an Australian sample. Lower social dominance orientation (SDO) was associated with greater endorsement of the rule, an effect mediated through the differential attribution of stereotypes. Specifically, those low in SDO were more likely to attribute traits of warmth and incompetence to David versus Goliath groups, a pattern of stereotypes that was related to the protection of David groups from criticism.
Abstract:
Nanotubes and nanosheets are low-dimensional nanomaterials with unique properties that can be exploited for numerous applications. This book offers a complete overview of their structure, properties, development, modeling approaches, and practical use. It focuses attention on boron nitride (BN) nanotubes, which have attracted major interest given their special high-temperature properties, as well as graphene nanosheets, BN nanosheets, and metal oxide nanosheets. Key topics include surface functionalization of nanotubes for composite applications, wetting property changes for biocompatible environments, and graphene for energy storage applications.
Abstract:
Background The expression of biomass-degrading enzymes (such as cellobiohydrolases) in transgenic plants has the potential to reduce the costs of biomass saccharification by providing a source of enzymes to supplement commercial cellulase mixtures. Cellobiohydrolases are the main enzymes in commercial cellulase mixtures. In the present study, a cellobiohydrolase was expressed in transgenic corn stover leaf and assessed as an additive for two commercial cellulase mixtures in the saccharification of sugar cane bagasse pretreated by different processes. Results Recombinant cellobiohydrolase in the senescent leaves of transgenic corn was extracted using a simple buffer with no concentration step. The extract significantly enhanced the performance of Celluclast 1.5 L (a commercial cellulase mixture) by up to fourfold, compared to the commercial cellulase mixture on its own, on sugar cane bagasse pretreated at the pilot scale using a dilute sulfuric acid steam explosion process. The extracts also enhanced the performance of Cellic CTec2 (a commercial cellulase mixture) by up to fourfold on a range of residues from sugar cane bagasse pretreated at the laboratory scale (using acidified ethylene carbonate/ethylene glycol, 1-butyl-3-methylimidazolium chloride, and ball-milling) and the pilot scale (dilute sodium hydroxide and glycerol/hydrochloric acid steam explosion). Using tap water as a solvent under conditions that mimic an industrial process, we demonstrated extraction of about 90% of the recombinant cellobiohydrolase from senescent transgenic corn stover leaf with minimal tissue disruption. Conclusions The accumulation of recombinant cellobiohydrolase in senescent transgenic corn stover leaf is a viable strategy to reduce the saccharification cost associated with the production of fermentable sugars from pretreated biomass.
We envisage an industrial-scale process in which transgenic plants provide both fibre and biomass-degrading enzymes for pretreatment and enzymatic hydrolysis, respectively.
Abstract:
Map-matching algorithms that utilise road segment connectivity along with other data (i.e. position, speed, and heading) in the process of map-matching are normally suited to high-frequency (1 Hz or higher) positioning data from GPS. When such map-matching algorithms are applied to low-frequency data (such as data from a fleet of private cars, buses or light-duty vehicles, or smartphones), their performance drops to around 70% in terms of correct link identification, especially in urban and suburban road networks. This level of performance may be insufficient for some real-time Intelligent Transport System (ITS) applications and services, such as estimating link travel time and speed from low-frequency GPS data. Therefore, this paper develops a new weight-based shortest-path and vehicle-trajectory-aided map-matching (stMM) algorithm that enhances the map-matching of low-frequency positioning data on a road map. The well-known A* search algorithm is employed to derive the shortest path between two points while taking into account both link connectivity and turn restrictions at junctions. In the developed stMM algorithm, two additional weights related to the shortest path and vehicle trajectory are considered: one shortest-path-based weight relates the distance along the shortest path to the distance along the vehicle trajectory, while the other is associated with the heading difference of the vehicle trajectory. The developed stMM algorithm is tested using a series of real-world datasets of varying frequencies (i.e. 1 s, 5 s, 30 s, and 60 s sampling intervals). A high-accuracy integrated navigation system (a high-grade inertial navigation system and a carrier-phase GPS receiver) is used to measure the accuracy of the developed algorithm. The results suggest that the algorithm identifies 98.9% of the links correctly for 30 s GPS data. Omitting the information from the shortest path and vehicle trajectory, the accuracy of the algorithm reduces to about 73% in terms of correct link identification. The algorithm can process on average 50 positioning fixes per second, making it suitable for real-time ITS applications and services.
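The role of the two additional stMM weights can be illustrated with a small sketch. The functional forms and the coefficients `w_sp` and `w_h` below are illustrative assumptions for exposition only, not the paper's calibrated weight formulation:

```python
import math

def stmm_weight(d_shortest_path, d_trajectory, heading_gps, heading_link,
                w_sp=0.5, w_h=0.5):
    """Combine a shortest-path distance weight with a heading weight.

    d_shortest_path / d_trajectory: distances (m) along the A*-derived
    shortest path and along the raw vehicle trajectory between two fixes.
    heading_gps / heading_link: headings in degrees.
    """
    # Path-length agreement: 1.0 when the shortest path and the
    # trajectory cover the same distance, smaller as they diverge.
    ratio = min(d_shortest_path, d_trajectory) / max(d_shortest_path, d_trajectory)
    # Heading agreement: 1.0 when headings match, 0.0 when opposite
    # (differences wrap around at 360 degrees).
    diff = abs(heading_gps - heading_link) % 360.0
    diff = min(diff, 360.0 - diff)
    heading_score = math.cos(math.radians(diff)) * 0.5 + 0.5
    return w_sp * ratio + w_h * heading_score

# A candidate link whose length and heading agree with the trajectory
# scores 1.0; disagreement on either component lowers the score.
print(stmm_weight(100.0, 100.0, 90.0, 90.0))  # 1.0
```

A real implementation would add these scores to the algorithm's existing position, speed, and connectivity weights before selecting the best candidate link.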
Abstract:
In this paper we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method incorporates an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel pseudo-marginal algorithm, which we refer to as alive SMC^2. The advantages of this approach over competing approaches are that it is naturally adaptive, it does not involve the between-model proposals required in reversible-jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving-average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series, and the cumulative number of prion disease cases in mule deer.
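A minimal sketch of the alive particle filter component may help. For low-count discrete data, each likelihood increment can be estimated without evaluating the intractable likelihood by proposing particles until N + 1 of them reproduce the observation exactly and using the unbiased estimator (N − 1)/(trials − 1). The model functions below are caller-supplied placeholders, not the paper's models:

```python
import math
import random

def alive_particle_filter(y, n_particles, init_sampler, transition, match):
    """Unbiased log-likelihood estimate via the alive particle filter.

    match(x, y_t) is True when a simulated observation from state x
    reproduces the data point y_t exactly (feasible in the low-count
    setting). init_sampler and transition define the Markov model.
    """
    N = n_particles
    particles = [init_sampler() for _ in range(N)]
    loglik = 0.0
    for y_t in y:
        alive, trials = [], 0
        # Propose until N + 1 particles are "alive" (match y_t);
        # discarding the (N + 1)-th makes (N - 1)/(trials - 1) an
        # unbiased estimate of the likelihood increment.
        while len(alive) < N + 1:
            x = transition(random.choice(particles))
            trials += 1
            if match(x, y_t):
                alive.append(x)
        particles = alive[:N]
        loglik += math.log((N - 1) / (trials - 1))
    return loglik

# Degenerate check: if every proposal matches, trials = N + 1 at each
# step, so each increment is log((N - 1) / N).
est = alive_particle_filter([1, 1], 10, lambda: 1, lambda x: x,
                            lambda x, y_t: x == y_t)
```

In the pseudo-marginal construction, the SMC sampler over parameters simply substitutes this noisy but unbiased likelihood estimate wherever the exact likelihood would appear.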
Abstract:
While the economic and environmental benefits of fisheries management are well accepted, the costs of effective management in low-value fisheries, including the research necessary to underpin such management, may be considerable relative to the total economic benefits generated. Co-management is often seen as a panacea for low-value fisheries: increasing fisher participation increases the legitimacy of management decisions in the absence of detailed scientific input. However, where only a small number of operators exist, the potential benefits of co-management are negated by the high transaction costs to the individual fishers engaging in the management process. From an economic perspective, sole ownership has been identified as the management structure that can best achieve biological and economic sustainability. Moving low-value fisheries with a small number of participants to a corporate-cooperative management model may come close to achieving these sole-ownership benefits, with lower transaction costs. In this paper we examine the applicability of different management models with industry involvement to low-value fisheries with a small number of participants. We provide an illustration of how a fishery could be transitioned to a corporate-cooperative management model that captures the key benefits of sole management at low cost and is consistent with societal objectives.