976 results for new procedure
Abstract:
A key feature of ‘TESOL Quarterly’, a leading journal in the field of TESOL/applied linguistics, is its ‘Forum’ section, which invites ‘responses and rebuttals’ from readers to any of its articles. These ‘responses or rebuttals’ form the focus of this research. In the interchanges between readers reacting to earlier research articles in TESOL Quarterly and the authors responding to those reactions, I examine the texts for evidence of genre-driven structure, whether shared between the ‘reaction’ and ‘response’ sections or peculiar to each, and attempt to determine the precise nature of the intended communicative purpose in particular and the implications for academic debate in general. The intended contribution of this thesis is to provide an analysis of how authors of research articles and their critics pursue their efforts beyond the research article which precipitated these exchanges in order to be recognized by their discourse community as, in the terminology of Swales (1981:51), ‘Primary Knowers’. Awareness of any principled generic process identified in this thesis may be of significance to practitioners in the applied linguistics community in their quest to establish academic reputation and in their pursuit of professional development. These findings may also be of use in triggering productive community discussion through the questions they raise concerning the present nature of academic debate. Looking beyond the construction and status of the texts themselves, I inquire into the kind of ideational and social organization such exchanges keep in place and examine an alternative view of interaction. This study breaks new ground in two major ways. To the best of my knowledge, it is the first exploration of a bipartite, intertextual structure laying claim to genre status. Secondly, in its recourse to the comments of the writers themselves rather than relying exclusively on the evidence of their texts, as is the case with most studies of genre, this thesis offers an expanded opportunity to discuss perhaps the most interesting aspect of genre analysis: the light it throws on social ends and the role of genre in determining the nature of current academic debate as it emerges here.
Abstract:
ProxiMAX randomisation achieves saturation mutagenesis of contiguous codons without degeneracy or bias. Offering an alternative to trinucleotide phosphoramidite chemistry, it uses nothing more sophisticated than unmodified oligonucleotides and standard molecular biology reagents and, as such, requires no specialised chemistry, reagents or equipment. When particular residues are known to affect protein activity/specificity, their combinatorial replacement with all 20 amino acids, or a subset thereof, can provide a rapid route to generating proteins with desirable characteristics. Conventionally, saturation mutagenesis replaced key codons with degenerate ones. Although simple to perform, that procedure resulted in unnecessarily large libraries, termination codons and inherently uneven amino acid representation. ProxiMAX randomisation is an enzyme-based technique that can encode unbiased representation of all or selected amino acids, or can provide required codons in pre-defined ratios. Each saturated position can be defined independently of the others. ProxiMAX randomisation is achieved via saturation cycling: an iterative process comprising blunt-end ligation, amplification and digestion with a Type IIS restriction enzyme. We demonstrate both unbiased saturation of a short 6-mer peptide and saturation of a hypervariable region of an scFv antibody fragment, in which 11 contiguous codons are saturated with selected codons in pre-defined ratios. As such, ProxiMAX randomisation is particularly relevant to antibody engineering. The development of ProxiMAX randomisation from concept to reality is described.
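To make the contrast between degenerate-codon saturation and defined-ratio saturation concrete, the following Python sketch simulates both sampling schemes at a single saturated position; the target codon mix, the sample size and the choice of amino acids are hypothetical illustrations, not values taken from the ProxiMAX study.

```python
# A minimal, hypothetical sketch (assumed codon ratios; not the published ProxiMAX protocol):
# it contrasts amino acid representation from a conventional degenerate NNK codon with a
# codon mix supplied in pre-defined ratios, illustrating the bias and stop codons that
# ProxiMAX randomisation is designed to avoid.
import random
from collections import Counter

BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = dict(zip((a + b + c for a in BASES for b in BASES for c in BASES), AMINO_ACIDS))

def sample_nnk(n):
    """Draw codons from the degenerate NNK scheme (N = A/C/G/T, K = G/T)."""
    return [random.choice("ACGT") + random.choice("ACGT") + random.choice("GT") for _ in range(n)]

def sample_defined(ratios, n):
    """Draw codons from an explicit codon:frequency mix supplied in pre-defined ratios."""
    codons, weights = zip(*ratios.items())
    return random.choices(codons, weights=weights, k=n)

if __name__ == "__main__":
    n = 100_000
    nnk_counts = Counter(CODON_TABLE[c] for c in sample_nnk(n))
    # Hypothetical target mix: 50% Ala, 30% Trp, 20% His at one saturated position.
    defined = {"GCT": 0.5, "TGG": 0.3, "CAT": 0.2}
    defined_counts = Counter(CODON_TABLE[c] for c in sample_defined(defined, n))
    print("NNK amino acid frequencies (note '*' stop codons and uneven ratios):")
    print({aa: round(v / n, 3) for aa, v in sorted(nnk_counts.items())})
    print("Pre-defined ratio frequencies:")
    print({aa: round(v / n, 3) for aa, v in sorted(defined_counts.items())})
```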
Abstract:
Respiratory-volume monitoring is an indispensable part of mechanical ventilation. Here we present a new method of respiratory-volume measurement based on a single fibre-optic long-period sensor of bending and the correlation between torso curvature and lung volume. Unlike the commonly used air-flow-based measurement methods, the proposed sensor is drift-free and immune to air leaks. In the paper, we explain the working principle of the sensor, describe a two-step calibration-test measurement procedure, and present results that establish a linear correlation between the change in the local thorax curvature and the change in lung volume. We also discuss the advantages and limitations of these sensors with respect to the current standards. © 2013 IEEE.
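The two-step calibration-test idea can be illustrated with a short numerical sketch; the curvature and volume figures below are synthetic placeholders, not the paper's measurements.

```python
# A minimal sketch (synthetic numbers, not the paper's data) of the two-step procedure
# described above: (1) calibrate a linear map from the change in local thorax curvature to
# the change in lung volume against a reference measurement, (2) use that map to estimate
# volume changes from curvature readings alone.
import numpy as np

# Step 1: calibration - paired measurements of curvature change (1/m) and volume change (L).
curvature_delta = np.array([0.02, 0.05, 0.08, 0.11, 0.15])   # hypothetical sensor readings
volume_delta = np.array([0.31, 0.74, 1.22, 1.63, 2.28])      # hypothetical reference volumes

# Least-squares fit of volume = a * curvature + b (the linear correlation reported above).
a, b = np.polyfit(curvature_delta, volume_delta, 1)

# Step 2: test - estimate lung-volume change from curvature alone.
new_curvature = np.array([0.04, 0.09, 0.13])
estimated_volume = a * new_curvature + b
print(f"fit: volume ~ {a:.2f} * curvature + {b:.2f}")
print("estimated volume changes (L):", np.round(estimated_volume, 2))
```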
Abstract:
As is well known, the Convergence Theorem for Recurrent Neural Networks is based on Lyapunov's second method, which states that associated with any given net state there always exists a real number, in other words an element of the one-dimensional Euclidean space R, such that when the state of the net changes, its associated real number decreases. In this paper we introduce the two-dimensional Euclidean space R2 as the space associated with the net, and we define a pair of real numbers (x, y) associated with any given state of the net. We prove that when the net changes its state, the product x · y decreases. All states whose projections onto the energy plane lie on the same hyperbolic surface are considered points with the same energy level. On the other hand, we prove that if the states are classified according to their distances from the zero vector, only one pattern in each of the different classes may be at the same energy level. The retrieval procedure is analysed through the projection of the states onto that plane. The geometrical properties of the synaptic matrix W may be used to classify the n-dimensional state-vector space into n classes. A pattern to be recognized is seen as a point belonging to one of these classes, and depending on the class to which the pattern to be retrieved belongs, different weight parameters are used. The capacity of the net is improved and spurious states are reduced. In order to clarify and corroborate the theoretical results, an application is presented together with the formal theory.
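For readers unfamiliar with the classical result the abstract builds on, the sketch below shows the standard one-dimensional Lyapunov energy decreasing under asynchronous updates in a Hopfield-type net; the two-dimensional (x, y) energy proposed in the paper is not reproduced here, and the stored patterns are purely illustrative.

```python
# A minimal sketch of the classical convergence result: in a recurrent net with a symmetric,
# zero-diagonal weight matrix W, the scalar Lyapunov energy E(s) = -0.5 * s^T W s never
# increases under asynchronous sign updates. Patterns and network size are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Store two illustrative bipolar patterns with the Hebbian rule.
patterns = np.array([[1, -1, 1, -1, 1, -1], [1, 1, -1, -1, 1, 1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)              # zero self-connections, as required by the proof

def energy(s):
    return -0.5 * s @ W @ s

state = rng.choice([-1, 1], size=W.shape[0])
print("initial energy:", energy(state))
for _ in range(20):                   # asynchronous updates: one random neuron at a time
    i = rng.integers(W.shape[0])
    state[i] = 1 if W[i] @ state >= 0 else -1
    print("energy:", energy(state))   # monotonically non-increasing
```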
Abstract:
In this work, a new pattern recognition method based on the unification of algebraic and statistical approaches is described. The main point of the method is a voting procedure over statistically weighted regularities, which are linear separators in two-dimensional projections of the feature space. The report contains a brief description of the theoretical foundations of the method, a description of its software implementation, and the results of a series of experiments demonstrating its usefulness in practical tasks.
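The general idea of weighted voting over linear separators in two-dimensional projections can be sketched as follows; the data, the simple bisector-style separator and the accuracy-based weighting are stand-ins chosen for illustration, not the paper's actual regularities or weighting scheme.

```python
# A hedged sketch of the voting idea described above: for every two-dimensional projection of
# the feature space, build a simple linear separator (here the perpendicular bisector of the
# two class means), weight it by its training accuracy, and classify new points by a weighted
# vote across all projections. Data and weighting are illustrative.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
# Toy two-class data in a 4-dimensional feature space.
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 4)), rng.normal(1.5, 1.0, size=(50, 4))])
y = np.array([0] * 50 + [1] * 50)

separators = []
for i, j in combinations(range(X.shape[1]), 2):           # all 2D projections
    P = X[:, [i, j]]
    m0, m1 = P[y == 0].mean(axis=0), P[y == 1].mean(axis=0)
    w, b = m1 - m0, -0.5 * (m1 - m0) @ (m0 + m1)          # separator: w.x + b = 0
    pred = (P @ w + b > 0).astype(int)
    weight = (pred == y).mean()                            # statistical weight = accuracy
    separators.append(((i, j), w, b, weight))

def classify(x):
    votes = sum(weight * (1 if x[[i, j]] @ w + b > 0 else -1)
                for (i, j), w, b, weight in separators)
    return int(votes > 0)

print("prediction for a new point:", classify(np.array([1.2, 1.4, 1.1, 1.6])))
```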
Abstract:
Internal quantum efficiency (IQE) of a high-brightness blue LED has been evaluated from the external quantum efficiency measured as a function of current at room temperature. Processing the data with a novel evaluation procedure based on the ABC-model, we have determined separately the IQE of the LED structure and the light extraction efficiency (LEE) of the UX:3 chip. Full text: Nowadays, understanding LED efficiency behaviour at high currents is critical to finding ways for further improvement of III-nitride LED performance [1]. External quantum efficiency ηe (EQE) provides integral information on the recombination and photon emission processes in LEDs. Meanwhile, EQE is the product of the IQE ηi and the LEE ηext at negligible carrier leakage from the active region. Separate determination of IQE and LEE would be much more helpful, providing correlation between these parameters and the specific epi-structure and chip design. In this paper, we extend the approach of [2,3] to the whole range of the current/optical power variation, providing an express tool for separate evaluation of IQE and LEE. We studied an InGaN-based LED fabricated by Osram OS. The LED structure, grown by MOCVD on a sapphire substrate, was processed as a UX:3 chip and mounted into a Golden Dragon package without molding. EQE was measured with a Labsphere CDS-600 spectrometer. Plotting EQE versus output power P and finding the power Pm corresponding to the EQE maximum ηm enables comparing the measurements with the analytical relationships ηi = Q/(Q + p^(1/2) + p^(-1/2)), p = P/Pm, and Q = B/(AC)^(1/2), where A, B and C are recombination constants [4]. As a result, the maximum IQE value, equal to Q/(Q+2), can be found from the ratio ηm/ηe plotted as a function of p^(1/2) + p^(-1/2) (see Fig. 1a), and the LEE is then calculated as ηext = ηm(Q+2)/Q. Experimental EQE as a function of normalized optical power p is shown in Fig. 1b along with the analytical approximation based on the ABC-model. The approximation fits the measurements perfectly over a range of optical power (or operating current) variation of eight orders of magnitude. In conclusion, a new express method for separate evaluation of IQE and LEE of III-nitride LEDs is suggested and applied to the characterization of a high-brightness blue LED. With this method, we obtained the LEE from the free chip surface to the air as 69.8% and the IQE as 85.7% at the maximum and 65.2% at the operation current of 350 mA. [1] G. Verzellesi, D. Saguatti, M. Meneghini, F. Bertazzi, M. Goano, G. Meneghesso, and E. Zanoni, "Efficiency droop in InGaN/GaN blue light-emitting diodes: Physical mechanisms and remedies," J. Appl. Phys., vol. 114, no. 7, p. 071101, Aug. 2013. [2] C. van Opdorp and G. W. 't Hooft, "Method for determining effective nonradiative lifetime and leakage losses in double-heterostructure lasers," J. Appl. Phys., vol. 52, no. 6, pp. 3827-3839, Feb. 1981. [3] M. Meneghini, N. Trivellin, G. Meneghesso, E. Zanoni, U. Zehnder, and B. Hahn, "A combined electro-optical method for the determination of the recombination parameters in InGaN-based light-emitting diodes," J. Appl. Phys., vol. 106, no. 11, p. 114508, Dec. 2009. [4] Qi Dai, Qifeng Shan, Jing Wang, S. Chhajed, Jaehee Cho, E. F. Schubert, M. H. Crawford, D. D. Koleske, Min-Ho Kim, and Yongjo Park, "Carrier recombination mechanisms and efficiency droop in GaInN/GaN light-emitting diodes," Appl. Phys. Lett., vol. 97, no. 13, p. 133507, Sept. 2010. © 2014 IEEE.
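The evaluation procedure described above can be reproduced numerically from the stated relations; the sketch below uses synthetic EQE-versus-power data (with assumed values of Q and LEE) rather than the Osram measurement set, and simply demonstrates how Q, the maximum IQE and the LEE follow from a straight-line fit.

```python
# A hedged numerical sketch of the ABC-model evaluation described above (synthetic data, not
# the measured LED): since eta_e = eta_ext * Q / (Q + p^0.5 + p^-0.5) with p = P/Pm, the
# ratio eta_m/eta_e is linear in x = p^0.5 + p^-0.5 with slope 1/(Q+2) and intercept Q/(Q+2);
# a straight-line fit therefore yields Q, the maximum IQE Q/(Q+2), and eta_ext = eta_m*(Q+2)/Q.
import numpy as np

# Synthetic "measurement": assume Q = 5.2 and LEE = 0.70 purely for illustration.
Q_true, lee_true = 5.2, 0.70
P = np.logspace(-4, 4, 61)                      # optical power in units of Pm
eqe = lee_true * Q_true / (Q_true + np.sqrt(P) + 1.0 / np.sqrt(P))

# Step 1: locate the EQE maximum and normalise the power axis.
i_max = np.argmax(eqe)
eta_m, P_m = eqe[i_max], P[i_max]
p = P / P_m

# Step 2: linear fit of eta_m/eqe versus x = p^0.5 + p^-0.5.
x = np.sqrt(p) + 1.0 / np.sqrt(p)
slope, intercept = np.polyfit(x, eta_m / eqe, 1)

Q = intercept / slope
iqe_max = Q / (Q + 2.0)                         # equals the fitted intercept
lee = eta_m * (Q + 2.0) / Q
print(f"Q = {Q:.2f}, max IQE = {iqe_max:.3f}, LEE = {lee:.3f}")
```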
Abstract:
In today’s manufacturing industry there is an increasing need to improve internal processes to meet diverse client needs. Process re-engineering is an important activity that is well understood by industry, but its rate of application within small to medium-sized enterprises (SMEs) is less developed. Business pressures shift the focus of SMEs toward winning new projects and contracts rather than developing long-term, sustainable manufacturing processes. Variations in manufacturing processes are inevitable, but the amount of non-conformity often exceeds acceptable levels. This paper focuses on the re-engineering of the manufacturing and verification procedure for discrete parts production, with the aim of enhancing process control and product verification. The ideologies of the ‘Push’ and ‘Pull’ approaches to manufacturing are useful in the context of process re-engineering for data improvement. Currently, information is pulled from the market and prominent customers, and manufacturing companies try to make the right product by following customer procedures that attempt to verify against specifications. This approach can result in significant quality control challenges. The aim of this paper is to highlight the importance of process re-engineering in product verification in SMEs. Leadership, culture, ownership and process management are among the main attributes required for the successful deployment of process re-engineering. This paper presents the findings from a case study showcasing the application of a modified re-engineering method for the manufacturing and verification process. The findings from the case study indicate there are several advantages to implementing the re-engineering method outlined in this paper.
Abstract:
Purpose – The purpose of this paper is to outline a seven-phase simulation conceptual modelling procedure that incorporates existing practice and embeds a process reference model (i.e. SCOR). Design/methodology/approach – An extensive review of the simulation and SCM literature identifies a set of requirements for a domain-specific conceptual modelling procedure. The associated design issues for each requirement are discussed, and the utility of SCOR in the process of conceptual modelling is demonstrated using two development cases. Ten key concepts are synthesised and aligned to a general process for conceptual modelling. Further work is outlined to detail, refine and test the procedure with different process reference models in different industrial contexts. Findings – Simulation conceptual modelling is often regarded as the most important yet least understood aspect of a simulation project (Robinson, 2008a). Even today, there has been little research into guidelines to aid the creation of a conceptual model. Design issues are discussed for building an ‘effective’ conceptual model, and the domain-specific requirements for modelling supply chains are addressed. The ten key concepts are incorporated to aid in describing the supply chain problem (i.e. the components and relationships that need to be included in the model), the model content (i.e. rules for determining the simplest model boundary and level of detail to implement the model) and model validation. Originality/value – The paper addresses Robinson’s (2008a) call for research into defining and developing new approaches to conceptual modelling and Manuj et al.’s (2009) discussion on improving the rigour of simulation studies in SCM. It is expected that more detailed guidelines will yield benefits to both expert modellers (i.e. averting typical modelling failures) and novice modellers (i.e. guided practice; less reliance on hopeful intuition).
Abstract:
Most pavement design procedures incorporate reliability to account for the effect of uncertainty and variability in the design inputs on predicted performance. The load and resistance factor design (LRFD) procedure, which delivers an economical section while considering the variability of the design inputs separately, has been recognised as an effective tool for incorporating reliability into design procedures. This paper presents a new reliability-based calibration, in LRFD format, for a mechanics-based fatigue cracking analysis framework. It employs a two-component reliability analysis methodology that utilises a central composite design-based response surface approach and a first-order reliability method. The reliability calibration was achieved based on a number of field pavement sections that have well-documented performance histories and high-quality field and laboratory data. The effectiveness of the developed LRFD procedure was evaluated by performing pavement designs for various target reliabilities and design conditions. The results show excellent agreement between the target and actual reliabilities. Furthermore, it is clear from the results that more design features need to be included in the reliability calibration to minimise the deviation of the actual reliability from the target reliability.
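As background for the first-order reliability method mentioned above, the short sketch below evaluates the reliability index and failure probability for the special case of a linear limit state with independent normal variables; the capacity and demand statistics are toy numbers, not values from the paper's calibration, and the general FORM iteration for non-linear limit states is not shown.

```python
# A hedged illustration (toy numbers, not the paper's calibration) of the first-order
# reliability concept: for a linear limit state g = R - S with independent normal capacity R
# and demand S, the reliability index is beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)
# and the probability of failure is Phi(-beta).
import math

def failure_probability(mu_r, sigma_r, mu_s, sigma_s):
    beta = (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta)
    return beta, pf

# Toy example: fatigue "capacity" vs predicted "demand" (illustrative units).
beta, pf = failure_probability(mu_r=1.0, sigma_r=0.25, mu_s=0.6, sigma_s=0.15)
print(f"reliability index beta = {beta:.2f}, failure probability = {pf:.4f}, "
      f"design reliability = {1 - pf:.4f}")
```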
Abstract:
The aim of this study was to present a new methodology for evaluating the pelvic floor muscle (PFM) passive properties. The properties were assessed in 13 continent women using an intra-vaginal dynamometric speculum and EMG (to ensure the subjects were relaxed) in four different conditions: (1) forces recorded at minimal aperture (initial passive resistance); (2) passive resistance at maximal aperture; (3) forces and passive elastic stiffness (PES) evaluated during five lengthening and shortening cycles; and (4) percentage loss of resistance after 1 min of sustained stretch. The PFMs and surrounding tissues were stretched, at constant speed, by increasing the vaginal antero-posterior diameter; different apertures were considered. Hysteresis was also calculated. The procedure was deemed acceptable by all participants. The median passive forces recorded ranged from 0.54 N (interquartile range 1.52) at minimal aperture to 8.45 N (interquartile range 7.10) at maximal aperture, while the corresponding median PES values were 0.17 N/mm (interquartile range 0.28) and 0.67 N/mm (interquartile range 0.60). Median hysteresis was 17.24 N·mm (interquartile range 35.60) and the median percentage loss of force was 11.17% (interquartile range 13.33). This original approach to evaluating the PFM passive properties is very promising for providing better insight into the pathophysiology of stress urinary incontinence and pinpointing conservative treatment mechanisms.
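How the two reported quantities are obtained from force-aperture curves can be illustrated numerically; the curves below are synthetic stand-ins, not the study's dynamometric recordings, and the specific stiffness definition (local slope) is an assumption made for illustration.

```python
# A hedged sketch (synthetic force-aperture curves, not the study's data) of the quantities
# reported above: passive elastic stiffness (PES), taken here as the local slope of the
# force-aperture curve, and hysteresis, taken as the area enclosed between the lengthening
# and shortening branches of one cycle.
import numpy as np

aperture = np.linspace(15.0, 40.0, 50)                     # antero-posterior opening, mm
force_lengthening = 0.5 + 0.012 * (aperture - 15.0) ** 2   # illustrative loading branch, N
force_shortening = 0.4 + 0.010 * (aperture - 15.0) ** 2    # illustrative unloading branch, N

# PES (N/mm): numerical slope of the loading branch at minimal and maximal aperture.
slope = np.gradient(force_lengthening, aperture)
print(f"PES at minimal aperture: {slope[0]:.2f} N/mm, at maximal aperture: {slope[-1]:.2f} N/mm")

# Hysteresis (N·mm): trapezoidal area between loading and unloading branches over one cycle.
gap = force_lengthening - force_shortening
hysteresis = np.sum((gap[1:] + gap[:-1]) / 2.0 * np.diff(aperture))
print(f"hysteresis: {hysteresis:.2f} N·mm")
```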
Abstract:
A selective chemical photosynthesis inhibitor, DCMU (Dichlorophenyl-dimethylurea), dissolved in DMSO (Dimethyl sulfoxide), was substituted for the dark incubation method commonly used to measure oxygen consumption in metabolic and primary production studies. We compared oxygen fluxes measured during light incubations with DCMU and during the standard dark incubation procedure on soft-bottom benthos. For this purpose, we studied the effects of different DCMU concentrations. A concentration of 5 · 10⁻⁵ mol l⁻¹ inside a clear incubation enclosure completely inhibits photosynthesis without affecting the metabolism of soft-bottom benthos.
Abstract:
An array of Bio-Argo floats equipped with radiometric sensors has recently been deployed in various open-ocean areas representative of the diversity of trophic and bio-optical conditions prevailing in the so-called Case 1 waters. Around solar noon and almost every day, each float acquires 0-250 m vertical profiles of Photosynthetically Available Radiation and downward irradiance at three wavelengths (380, 412 and 490 nm). To date, more than 6500 profiles have been acquired for each radiometric channel. As these radiometric data are collected without operator control and regardless of meteorological conditions, specific and automatic data-processing protocols have to be developed. Here, we present a data quality-control procedure aimed at verifying profile shapes and providing near real-time data distribution. This procedure is specifically developed to: 1) identify the main measurement issues (i.e. dark signal, atmospheric clouds, spikes and wave-focusing occurrences); 2) validate the final data with a hierarchy of tests to ensure their scientific usability. The procedure, adapted to each of the four radiometric channels, is designed to flag each profile in a way compliant with the data-management procedure used by the Argo program. The main perturbations in the light field are identified by the new protocols with good performance over the whole dataset, which highlights their potential applicability at the global scale. Finally, the comparison with modeled surface irradiances allows assessing the accuracy of quality-controlled measured irradiance values and identifying any possible evolution over the float lifetime due to biofouling and instrumental drift.
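One step of this kind of automatic quality control, spike detection on a single irradiance profile, can be sketched as follows; the profile, the threshold and the flag values are illustrative assumptions and do not reproduce the protocol's actual tests or flag scheme.

```python
# A hedged sketch (synthetic profile, invented threshold) of one quality-control step of the
# kind described above: flag spikes in a downward-irradiance profile by comparing each sample
# with a running median, assigning per-sample flags in the spirit of Argo-style QC flags.
import numpy as np

def flag_spikes(irradiance, window=5, rel_threshold=0.5):
    """Return a QC flag per sample: 1 = good, 4 = suspected spike (illustrative flag values)."""
    pad = window // 2
    padded = np.pad(irradiance, pad, mode="edge")
    running_median = np.array([np.median(padded[k:k + window]) for k in range(len(irradiance))])
    relative_residual = np.abs(irradiance - running_median) / np.maximum(running_median, 1e-12)
    return np.where(relative_residual > rel_threshold, 4, 1)

# Synthetic 0-250 m profile of downward irradiance at 490 nm with two artificial spikes.
depth = np.arange(0.0, 250.0, 2.0)
ed490 = 1.2 * np.exp(-0.04 * depth)
ed490[[30, 80]] *= 3.0                       # e.g. wave-focusing or sensor glitches
flags = flag_spikes(ed490)
print("flagged depths (m):", depth[flags == 4])
```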
Abstract:
Objective: Caffeine has been shown to have effects on certain areas of cognition, but for executive functioning the research is limited and inconsistent. One reason could be the need for a more sensitive measure to detect the effects of caffeine on executive function. This study used a new non-immersive virtual reality assessment of executive functions known as JEF© (the Jansari Assessment of Executive Function) alongside the ‘classic’ Stroop Colour-Word task to assess the effects of a normal dose of caffeinated coffee on executive function. Method: Using a double-blind, counterbalanced, within-participants procedure, 43 participants were administered either caffeinated or decaffeinated coffee and completed the JEF© and Stroop tasks, as well as a subjective mood scale and blood pressure measurements, pre- and post-condition, on two separate occasions a week apart. JEF© yields measures for eight separate aspects of executive function, in addition to a total average score. Results: Findings indicate that performance was significantly improved on planning, creative thinking, and event-, time- and action-based prospective memory, as well as the total JEF© score, following caffeinated coffee relative to decaffeinated coffee. The caffeinated beverage significantly decreased reaction times on the Stroop task, but there was no effect on Stroop interference. Conclusion: The results provide further support for the effects of a caffeinated beverage on cognitive functioning. In particular, they demonstrate the ability of JEF© to detect the effects of caffeine across a number of executive functioning constructs which were not shown in the Stroop task, suggesting that executive functioning improvements resulting from a ‘typical’ dose of caffeine may only be detected by the use of more real-world, ecologically valid tasks.