967 results for "Eliminate lost time"
Abstract:
The composition of amorphous oxide semiconductors, which are well known for their optical transparency, can be tailored to enhance their absorption and induce photoconductivity under irradiation with green and shorter-wavelength light. In principle, amorphous oxide semiconductor-based thin-film photoconductors could hence be applied as photosensors. However, their photoconductivity persists for hours after illumination has been removed, which severely degrades the response time and the frame rate of oxide-based sensor arrays. We have solved the problem of persistent photoconductivity (PPC) by developing a gated amorphous oxide semiconductor photo thin-film transistor (photo-TFT) that provides direct control over the position of the Fermi level in the active layer. Applying a short-duration (10 ns) voltage pulse to these devices induces electron accumulation and accelerates their recombination with the ionized oxygen vacancy sites that are thought to cause PPC. We have integrated these photo-TFTs in a transparent active-matrix photosensor array that can be operated at high frame rates and that has potential applications in contact-free interactive displays. © 2012 Macmillan Publishers Limited. All rights reserved.
Abstract:
In this paper, we investigate the remanufacturing problem of pricing single-class used products (cores) in the face of random price-dependent returns and random demand. Specifically, we propose a dynamic pricing policy for the cores and then model the problem as a continuous-time Markov decision process. Our models are designed to address three objectives: finite horizon total cost minimization, infinite horizon discounted cost minimization, and average cost minimization. Besides proving uniqueness of the optimal policy and establishing monotonicity results for the infinite horizon problem, we also characterize the structures of the optimal policies, which can greatly simplify the computational procedure. Finally, we use computational examples to assess the impacts of specific parameters on the optimal price and to reveal the benefits of a dynamic pricing policy. © 2013 Elsevier B.V. All rights reserved.
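As a loosely related illustration of the kind of model sketched in the abstract above, the following is a minimal value-iteration sketch for a discrete-time, heavily simplified core-acquisition pricing problem. Every parameter (state cap, return and demand probabilities, costs, discount factor) is hypothetical and none of it is taken from the paper.

```python
# Loosely related illustration only: value iteration for a discrete-time,
# heavily simplified core-acquisition pricing model. All parameters below
# are hypothetical and are not taken from the paper.
import numpy as np

N = 20                                   # maximum number of cores kept in stock
prices = np.linspace(1.0, 5.0, 9)        # candidate acquisition prices for cores
beta = 0.95                              # discount factor
hold_cost = 0.1                          # holding cost per core per period
lost_sale = 8.0                          # penalty when remanufacturing demand is unmet
p_demand = 0.5                           # probability that one unit of demand arrives

def p_return(price):
    """Higher posted acquisition price -> higher probability a core is returned."""
    return min(0.9, 0.15 * price)

V = np.zeros(N + 1)                      # value function over stock levels 0..N
for _ in range(2000):                    # value iteration until (approximate) convergence
    V_new = np.empty_like(V)
    for i in range(N + 1):
        best = np.inf
        for price in prices:
            pr = p_return(price)
            total = hold_cost * i        # holding cost is paid regardless of outcome
            for ret in (0, 1):           # is a core returned this period?
                for dem in (0, 1):       # does demand arrive this period?
                    prob = (pr if ret else 1 - pr) * (p_demand if dem else 1 - p_demand)
                    j = min(N, i + ret)  # acquire the returned core
                    stage = price * ret  # pay the posted price for it
                    if dem:
                        if j > 0:
                            j -= 1       # remanufacture and sell one core
                        else:
                            stage += lost_sale
                    total += prob * (stage + beta * V[j])
            best = min(best, total)      # the optimal price in state i is the minimizer here
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print("V(0) =", round(float(V[0]), 3))   # expected discounted cost with empty stock
```

As in the paper's monotonicity results, one would expect the minimizing price in such a model to fall as the stock of cores grows.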
Abstract:
OBJECTIVE - To evaluate an algorithm guiding responses of continuous subcutaneous insulin infusion (CSII)-treated type 1 diabetic patients using real-time continuous glucose monitoring (RT-CGM). RESEARCH DESIGN AND METHODS - Sixty CSII-treated type 1 diabetic participants (aged 13-70 years, including adult and adolescent subgroups, with A1C ≤9.5%) were randomized in age-, sex-, and A1C-matched pairs. Phase 1 was an open 16-week multicenter randomized controlled trial. Group A was treated with CSII/RT-CGM with the algorithm, and group B was treated with CSII/RT-CGM without the algorithm. The primary outcome was the difference in time in target (4-10 mmol/l) glucose range on 6-day masked CGM. Secondary outcomes were differences in A1C, low (≤3.9 mmol/l) glucose CGM time, and glycemic variability. Phase 2 was the week 16-32 follow-up. Group A was returned to usual care, and group B was provided with the algorithm. Glycemia parameters were as above. Comparisons were made between baseline and 16 weeks and 32 weeks. RESULTS - In phase 1, after withdrawals 29 of 30 subjects were left in group A and 28 of 30 subjects were left in group B. The change in target glucose time did not differ between groups. A1C fell (mean 7.9% [95% CI 7.7-8.2] to 7.6% [7.2-8.0]; P < 0.03) in group A but not in group B (7.8% [7.5-8.1] to 7.7% [7.3-8.0]; NS), with no difference between groups. More subjects in group A achieved A1C ≤7% than in group B (2 of 29 to 14 of 29 vs. 4 of 28 to 7 of 28; P = 0.015). In phase 2, one participant was lost from each group. In group A, A1C returned to baseline with RT-CGM discontinuation but did not change in group B, who continued RT-CGM with addition of the algorithm. CONCLUSIONS - Early but not late algorithm provision to type 1 diabetic patients using CSII/RT-CGM did not increase the target glucose time but increased achievement of A1C ≤7%. Upon RT-CGM cessation, A1C returned to baseline. © 2010 by the American Diabetes Association.
Abstract:
Kathmandu was one of the last few cities in the world to retain its medieval urban culture up until the twentieth century. Various Hindu and Buddhist religious practices shaped the arrangement of houses, roads and urban spaces, giving the city a distinctive physical form, character and a unique oriental nativeness. In recent decades, the urban culture of the city has been changing under the forces of urbanisation and globalisation and the demand for new buildings and spaces. New residential design is increasingly dominated by the distinctive patterns of the Western suburban ideal, comprising detached or semi-detached homes and high-rise tower blocks. This architectural iconoclasm can be construed as a rather crude response to the indigenous spaces and builtform. The paper attempts to dismantle the current tension between traditional and contemporary 'culture' (and hence society) and housing (or builtform) in Kathmandu by engaging in a discussion that cuts across space, time and the meaning of building. The paper concludes that residential architecture in Kathmandu today stands disoriented and lost in the transition.
Abstract:
Writing in the late 1980s, Nancy gives as examples of the "recent fashion for the sublime" not only the theoreticians of Paris, but the artists of Los Angeles, Berlin, Rome, and Tokyo. At the beginning of the twenty-first century, the sublime may of course no longer seem quite so "now" as it did back then, whether in North America, Europe, or Japan. Simon Critchley, for one, has suggested that, at least as regards the issue of its conceptual coupling to "postmodernism," the "debate" concerning the sublime "has become rather stale and the discussion has moved on." Nonetheless, if that debate has indeed "moved on"-and thankfully so-it is not without its remainder, particularly in the very contemporary context of a resurgence of interest in explicitly philosophical accounts of art, in the wake of an emergent critique of cultural studies and of the apparent waning of poststructuralism's influence-a resurgence that has led to a certain "return to aesthetics" in recent Continental philosophy and to the work of Kant, Schelling, and the German Romantics. Moreover, as Nancy's precise formulations suggest, the "fashion" [mode] through which the sublime "offers itself"-as "a break within or from aesthetics"-clearly contains a significance that Critchley's more straightforward narration of shifts in theoretical chic cannot encompass. At stake in this would be the relation between the mode of fashion and art's "destiny" within modernity itself, from the late eighteenth century onwards. Such a conception of art's "destiny," as inextricably linked to that of the sublime, is not unique to recent French theory. In a brief passage in Aesthetic Theory, Adorno also suggests that the "sublime, which Kant reserved exclusively for nature, later became the historical constituent of art itself.... [I]n a subtle way, after the fall of formal beauty, the sublime was the only aesthetic idea left to modernism." As such, although the term has its classical origins in Longinus, its historical character for "us," both Nancy and Adorno argue, associates it specifically with the emergence of the modern. As another philosopher states: "It is around this name [of the sublime] that the destiny of classical poetics was hazarded and lost; it is in this name that ... romanticism, in other words, modernity, triumphed."
Abstract:
High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that easily scales with problem size, leaving the scheduling of the tasks onto processors to be performed at runtime. Therefore, if the underlying system cannot efficiently execute those applications on the available cores, the benefits will be lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load balancing scheme, which enables parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
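To illustrate the combination the abstract describes, here is a minimal sketch of global EDF dispatch paired with priority-aware work stealing, in which an idle worker steals the most urgent (earliest-deadline) chunk from a busy one. This is not the paper's scheduler; the task names, deadlines and worker count are hypothetical.

```python
# Minimal sketch (not the paper's scheduler): global EDF dispatch combined with
# priority-aware work stealing by earliest deadline.
import heapq

class Task:
    def __init__(self, name, deadline, subtasks):
        self.name = name
        self.deadline = deadline      # absolute deadline of the parallel task
        self.subtasks = subtasks      # callables representing parallel chunks

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.local = []               # (deadline, chunk) pairs; owner works LIFO

    def push(self, deadline, chunk):
        self.local.append((deadline, chunk))

    def pop(self):
        return self.local.pop() if self.local else None

    def steal_from(self, victim):
        # Priority-aware stealing: take the victim's most urgent chunk
        # (earliest deadline) rather than simply its oldest one.
        if not victim.local:
            return None
        i = min(range(len(victim.local)), key=lambda k: victim.local[k][0])
        return victim.local.pop(i)

def run(tasks, n_workers=4):
    ready = [(t.deadline, i, t) for i, t in enumerate(tasks)]
    heapq.heapify(ready)              # global EDF queue of released tasks
    workers = [Worker(w) for w in range(n_workers)]
    while ready or any(w.local for w in workers):
        # Global EDF: hand the earliest-deadline task's chunks to an idle worker.
        for w in workers:
            if not w.local and ready:
                d, _, t = heapq.heappop(ready)
                for chunk in t.subtasks:
                    w.push(d, chunk)
        # Each worker executes one chunk; idle workers steal by deadline.
        for w in workers:
            item = w.pop()
            if item is None:
                for victim in workers:
                    if victim is not w:
                        item = w.steal_from(victim)
                        if item:
                            break
            if item:
                item[1]()             # execute the chunk

if __name__ == "__main__":
    mk = lambda name: (lambda: print("ran", name))
    run([Task("A", deadline=10, subtasks=[mk("A1"), mk("A2")]),
         Task("B", deadline=5, subtasks=[mk("B1"), mk("B2")])])
```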
Abstract:
Crawford Lake is a meromictic lake, 24 m deep with an area of 2.5 ha, which has never been reported to have mixed below 16 m. Lady Evelyn Lake, which became a reservoir when a dam was built in 1916, is dimictic with a maximum depth of about 35 m. My research showed that both native chlorophylls and the ratio of chlorophyll derivatives to total carotenoids were better preserved in the shallower lake (Crawford Lake) because it was meromictic. Thus the anaerobic conditions in Crawford Lake below 16 m (the monimolimnion) provide excellent conditions for pigment preservation. Under such conditions, the preservation of both chlorophylls and carotenoids, including oscillaxanthin and myxoxanthophyll, is extremely good compared with that in Lady Evelyn Reservoir, in which anaerobic conditions are rarely encountered at the mud-water interface. During the period from 1500 to 1900 A.D. in Crawford Lake, the accumulation rates of oscillaxanthin and myxoxanthophyll were extremely high, but those of chlorophyll derivatives and total carotenoids were relatively low. This was correlated with the presence of a dense benthic mat of cyanobacteria near the lake's chemocline. Competition for light between the deep-dwelling cyanobacteria and the overlying phytoplankton in this meromictic lake would have intensified as the lake became more and more eutrophic (1955-1991 A.D.). During the period from 1955 to 1991 A.D., the accumulation rates of chlorophyll derivatives and total carotenoids in the sediment core from Crawford Lake (0-7.5 cm, 1955-present) increased. During this same period, the accumulation rates of cyanobacterial pigments (i.e. oscillaxanthin and myxoxanthophyll) declined as the lake became more eutrophic. Because the major cyanobacteria in Crawford Lake are benthic mat-forming Lyngbya and Oscillatoria and not phytoplankton, eutrophication resulted in a decline of the mat-forming algal pigments. This is important because in previous palaeolimnological studies the concentrations of oscillaxanthin and myxoxanthophyll have been used as correlates of lake trophic level. The results of organic carbon δ13C analysis on the Crawford Lake sediment core supported the conclusions from the pigment study noted above. High δ13C values at a depth of 34-48 cm (1500-1760 A.D.) were related to a dense population of benthic Oscillatoria and Lyngbya living on the bottom of the lake during that period. The Oscillatoria and Lyngbya utilized bicarbonate, which has a high δ13C value. Very low values were found at 0-7 cm in the Crawford sediment core. By this time phytoplankton was the main primary producer, which preferentially assimilates 12C during photosynthesis.
Abstract:
Multiplication in the Galois field with 2^m elements (i.e. GF(2^m)) is a very important operation for applications in error-correcting codes and cryptography. In this thesis, we focus on parallel implementations of GF(2^m) multipliers when the field is generated by irreducible trinomials. Our starting point is the Montgomery multiplier, which efficiently computes A(x)B(x)x^(-u) for A(x), B(x) in GF(2^m) and a judiciously chosen u. We then study the PCHS divide-and-conquer algorithm, which partitions the operands of a product in GF(2^m) when m is odd. We apply it to the partitioning of A(x) and B(x) in the Montgomery multiplication A(x)B(x)x^(-u) over GF(2^m), even when m is even. Based on this new approach, we construct a multiplier for GF(2^m) generated by irreducible trinomials. A new trick for reusing intermediate results allows us to eliminate several redundant XOR gates. The time complexity (i.e. delay) and space complexity (i.e. number of logic gates) of the new multiplier are then analysed: 1. The new multiplier requires about 25% fewer logic gates than the Montgomery and Mastrovito multipliers when GF(2^m) is generated by irreducible trinomials and m is sufficiently large. Its gate count is almost identical to that of the Karatsuba multiplier proposed by Elia. 2. The delay of the new multiplier exceeds that of the best multipliers by at most two XOR gate delays. 3. We determine the delay and the gate count of the new multiplier over the two Galois fields recommended by the National Institute of Standards and Technology (NIST). We show that our multiplier contains 15% fewer logic gates than the Montgomery and Mastrovito multipliers at the cost of at most one additional XOR gate delay. Moreover, our multiplier has one XOR gate delay less than Elia's multiplier at the cost of an increase of less than 1% in the total number of logic gates.
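For readers unfamiliar with the underlying arithmetic, the sketch below shows plain schoolbook (shift-and-XOR) multiplication in GF(2^m) with reduction modulo an irreducible trinomial x^m + x^k + 1. It is a software illustration only, not the thesis' gate-level Montgomery/PCHS multiplier, and the small field GF(2^7) defined by x^7 + x + 1 is an arbitrary illustrative choice.

```python
# Illustrative software sketch only: schoolbook multiplication in GF(2^m)
# with reduction modulo an irreducible trinomial x^m + x^k + 1.

M, K = 7, 1                                  # defining trinomial x^M + x^K + 1

def gf2m_mul(a, b, m=M, k=K):
    """Multiply two field elements given as integers whose bits are polynomial
    coefficients, then reduce modulo x^m + x^k + 1."""
    prod = 0
    while b:                                 # carry-less polynomial multiplication
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # Reduction: for each set bit at position i >= m, use x^m = x^k + 1, i.e.
    # x^i = x^(i-m+k) + x^(i-m); process from the top down so any new high
    # bits introduced by the substitution are handled as well.
    for i in range(2 * m - 2, m - 1, -1):
        if prod & (1 << i):
            prod ^= (1 << i) | (1 << (i - m + k)) | (1 << (i - m))
    return prod

# Quick check: (x + 1)(x^6 + x^5 + ... + 1) = x^7 + 1, which reduces to x in GF(2^7)
print(bin(gf2m_mul(0b11, 0b1111111)))        # -> 0b10
```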
Abstract:
This article for the first time considers all extant ancient evidence for the habit of carving inscriptions on tree trunks. A picture emerges that bears remarkable resemblances to what is known of the habit of graffiti writing (with important additions to that latter field to be derived from the findings), for both individual and technical texts.
Abstract:
One major assumption in all orthogonal space-time block coding (O-STBC) schemes is that the channel remains static over the length of the code word. However, time-selective fading channels do exist, and in such cases conventional O-STBC detectors can suffer from a large error floor at high signal-to-noise ratio (SNR). As a sequel to the authors' previous papers on this subject, this paper aims to eliminate the error floor of the H(i)-coded O-STBC system (i = 3 and 4) by employing the techniques of: 1) zero forcing (ZF) and 2) parallel interference cancellation (PIC). It is shown that for an H(i)-coded system the PIC is a much better choice than the ZF in terms of both performance and computational complexity. Compared with the conventional H(i) detector, the PIC detector incurs a moderately higher computational complexity, but this is well justified by the enormous performance improvement.
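To make the two detection techniques concrete, the following is a generic sketch of zero forcing followed by one stage of parallel interference cancellation for a linear model y = Hx + n with BPSK symbols. It is not the paper's H(i)-specific O-STBC detector; the dimensions, SNR and channel model are hypothetical.

```python
# Generic ZF + one-stage PIC sketch for y = Hx + n (not the paper's detector).
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, snr_db = 4, 4, 12

# Random complex channel, BPSK symbol vector and noisy observation
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=n_tx)
noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
y = H @ x + noise_std * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))

# Stage 1: zero-forcing (pseudo-inverse) estimate with hard decisions
x_zf = np.sign((np.linalg.pinv(H) @ y).real)

# Stage 2: PIC -- for each symbol, reconstruct and subtract the interference
# caused by the other symbols' tentative decisions, then re-detect it alone.
x_pic = np.empty_like(x_zf)
for k in range(n_tx):
    interference = H @ x_zf - H[:, k] * x_zf[k]
    residual = y - interference
    x_pic[k] = np.sign((H[:, k].conj() @ residual).real)   # matched filter

print("true:", x)
print("ZF  :", x_zf)
print("PIC :", x_pic)
```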
Abstract:
The Sun's open magnetic field, magnetic flux dragged out into the heliosphere by the solar wind, varies by approximately a factor of 2 over the solar cycle. We consider the evolution of open solar flux in terms of a source and loss term. Open solar flux creation is likely to proceed at a rate dependent on the rate of photospheric flux emergence, which can be roughly parameterized by sunspot number or coronal mass ejection rate, when available. The open solar flux loss term is more difficult to relate to an observable parameter. The supersonic nature of the solar wind means open solar flux can only be removed by near-Sun magnetic reconnection between open solar magnetic field lines, be they open or closed heliospheric field lines. In this study we reconstruct open solar flux over the last three solar cycles and demonstrate that the loss term may be related to the degree to which the heliospheric current sheet (HCS) is warped, i.e., locally tilted from the solar rotation direction. This can account for both the large dip in open solar flux at the time of sunspot maximum as well as the asymmetry in open solar flux during the rising and declining phases of the solar cycle. The observed cycle-to-cycle variability is also well matched. Following Sheeley et al. (2001), we attribute modulation of open solar flux by the degree of warp of the HCS to the rate at which opposite polarity open solar flux is brought together by differential rotation.
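In schematic form (the proportional dependences below are illustrative assumptions, not the paper's fitted parameterization), the source/loss framing described above can be written as

\[
\frac{d\Phi_{\mathrm{open}}}{dt} = S(t) - L(t),
\qquad S(t) \propto R(t),
\qquad L(t) \propto W_{\mathrm{HCS}}(t),
\]

where \(R(t)\) stands for the sunspot number or coronal mass ejection rate and \(W_{\mathrm{HCS}}(t)\) for a measure of the warp (local tilt) of the heliospheric current sheet.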
Abstract:
Older adult computer users often lose track of the mouse cursor and resort to methods such as shaking the mouse or searching the screen to find the cursor again. Hence, this paper describes how a standard optical mouse was modified to include a touch sensor, activated by releasing and touching the mouse, which automatically centers the mouse cursor on the screen, potentially making it easier to find a 'lost' cursor. Six older adult computer users and six younger computer users were asked to compare the touch-sensitive mouse with cursor centering against two alternative techniques for locating the mouse cursor: manually shaking the mouse and using the Windows sonar facility. The time taken to click on a target following a distractor task was recorded, and the results show that centering the mouse cursor was the fastest to use, with a 35% improvement over shaking the mouse. Five out of six older participants ranked the touch-sensitive mouse with cursor centering as the easiest to use.
Abstract:
Dynamic multi-user interactions in a single networked virtual environment suffer from abrupt state transition problems due to communication delays arising from network latency--an action by one user only becoming apparent to another user after the communication delay. This results in a temporal suspension of the environment for the duration of the delay--the virtual world `hangs'--followed by an abrupt jump to make up for the time lost due to the delay so that the current state of the virtual world is displayed. These discontinuities appear unnatural and disconcerting to the users. This paper proposes a novel method of warping times associated with users to ensure that each user views a continuous version of the virtual world, such that no hangs or jumps occur despite other user interactions. Objects passed between users within the environment are parameterized, not by real time, but by a virtual local time, generated by continuously warping real time. This virtual time periodically realigns itself with real time as the virtual environment evolves. The concept of a local user dynamically warping the local time is also introduced. As a result, the users are shielded from viewing discontinuities within their virtual worlds, consequently enhancing the realism of the virtual environment.
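The core idea of a continuously warped local clock can be illustrated with a small sketch: after a communication delay is registered, virtual time runs slightly faster than real time until the accumulated lag is absorbed, so the user never sees a hang followed by a jump. This is only an illustration of the idea, not the paper's method; the catch-up rate and delay below are hypothetical.

```python
# Minimal sketch of a warped local virtual clock (illustrative only).
import time

class WarpedClock:
    def __init__(self, catch_up_rate=1.25):
        self.catch_up_rate = catch_up_rate   # virtual seconds per real second while behind
        self.lag = 0.0                       # how far virtual time trails real time
        self.virtual = 0.0
        self.last_real = time.monotonic()

    def on_delay(self, seconds):
        """Register a communication delay: virtual time falls behind by `seconds`."""
        self.lag += seconds

    def now(self):
        """Advance virtual time continuously, catching up smoothly while lagging."""
        real = time.monotonic()
        dt = real - self.last_real
        self.last_real = real
        if self.lag > 0.0:
            advance = min(dt * self.catch_up_rate, dt + self.lag)
            self.lag -= advance - dt         # lag shrinks by the extra progress made
            self.virtual += advance
        else:
            self.virtual += dt
        return self.virtual

clock = WarpedClock()
clock.on_delay(0.2)                          # e.g. a 200 ms network stall
for _ in range(5):
    time.sleep(0.1)
    print(f"virtual time: {clock.now():.3f}s, remaining lag: {clock.lag:.3f}s")
```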
Abstract:
The collection of wind speed time series by means of digital data loggers occurs in many domains, including civil engineering, environmental sciences and wind turbine technology. Since averaging intervals are often significantly larger than typical system time scales, the information lost has to be recovered in order to reconstruct the true dynamics of the system. In the present work we introduce a simple algorithm capable of generating a real-time wind speed time series from data logger records containing the average, maximum, and minimum values of the wind speed in a fixed interval, as well as the standard deviation. The signal is generated from a generalized random Fourier series. The spectrum can be matched to any desired theoretical or measured frequency distribution. Extreme values are specified through a postprocessing step based on the concept of constrained simulation. Applications of the algorithm to 10-min wind speed records logged at a test site at 60 m height above the ground show that the recorded 10-min values can be reproduced by the simulated time series to a high degree of accuracy.
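The reconstruction idea lends itself to a small illustration. The sketch below is not the paper's algorithm: it synthesizes a random-phase Fourier series with an assumed ~1/f spectral shape, rescales it to the logged mean and standard deviation, and simply clips to the logged extremes in place of the paper's constrained-simulation step; the logged statistics are hypothetical.

```python
# Simplified sketch (not the paper's algorithm): synthesize a 1 Hz wind speed
# signal for a 10-min interval from logged mean, std, minimum and maximum.
import numpy as np

def synth_interval(mean, std, vmin, vmax, n=600, seed=0):
    rng = np.random.default_rng(seed)
    k = np.arange(1, n // 2)                 # Fourier mode numbers (zero-mean signal)
    amp = 1.0 / k                            # assumed spectral shape
    phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    t = np.arange(n)
    x = (amp[:, None] * np.cos(2.0 * np.pi * k[:, None] * t[None, :] / n
                               + phase[:, None])).sum(axis=0)
    x = (x - x.mean()) / x.std()             # normalize the fluctuations
    v = mean + std * x                       # impose the logged mean and std
    return np.clip(v, vmin, vmax)            # crude stand-in for constrained extremes

series = synth_interval(mean=7.2, std=1.1, vmin=4.0, vmax=11.5)
print(series.mean(), series.std(), series.min(), series.max())
```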