867 results for one-time passwords


Relevance: 80.00%

Abstract:

The leafing phenology of two dry-forest sites on soils of different depth (S = shallow, D = deep) at Shipstern Reserve, Belize, was compared at the start of the rainy season (April-June 2000). Trees greater than or equal to 2.5 cm dbh were recorded weekly for 8 wk in three 0.04-ha plots per site. Ten species were analysed individually for their phenological patterns, of which the three most common were Bursera simaruba, Metopium brownei and Jatropha gaumeri. Trees were divided into those in the canopy (> 10 cm dbh) and the subcanopy (less than or equal to 10 cm dbh). Site S had larger trees on average than site D. The proportion of trees flushing leaves at any one time was generally higher in site S than in site D, for both canopy and subcanopy trees. Leaf flush started 2 wk earlier in site S than site D for subcanopy trees, but only 0.5 wk earlier for the canopy trees. Leaf flush duration was 1.5 wk longer in site S than site D. Large trees in the subcanopy flushed leaves earlier than small ones at both sites, but in the canopy only at site D. Large trees flushed leaves earlier than small ones in three species, and small trees flushed leaves more rapidly in two species. Bursera and Jatropha followed the general trends, but Metopium, with larger trees in site D than site S, showed the converse, with onset of flushing 1 wk earlier in site D than site S. Differences in the response of the canopy and subcanopy trees at each site can be accounted for by the predominance of spring-flushing or stem-succulent species in site S and a tendency for evergreen species to occur in site D. Early flushing of relatively larger trees in site D most likely requires access to deeper soil water reserves, whereas both small and large trees utilize stored tree water in site S.

Relevance: 80.00%

Abstract:

Currently, systemic immunosuppression is used in vascularized composite allotransplantation (VCA). This treatment has considerable side effects and reduces the quality of life of VCA recipients. We loaded the immunosuppressive drug tacrolimus into a self-assembled hydrogel, which releases the drug in response to proteolytic enzymes that are overexpressed during inflammation. A one-time local injection of the tacrolimus-laden hydrogel significantly prolonged graft survival in a Brown Norway-to-Lewis rat hindlimb transplantation model, leading to a median graft survival of >100 days, compared to 33.5 days in recipients treated with tacrolimus alone. Control groups receiving no treatment or hydrogel alone showed a graft survival of 11 days. Histopathological evaluation, including anti-graft antibodies and complement C3, revealed significantly reduced immune responses in the tacrolimus-hydrogel group compared with tacrolimus alone. In conclusion, a single-dose local injection of an enzyme-responsive tacrolimus hydrogel can prevent VCA rejection for >100 days in a rat model and may offer a new approach to immunosuppression in VCA.

Relevance: 80.00%

Abstract:

Neutropenia is probably the strongest known predisposition to infection with otherwise harmless environmental or microbiota-derived species. Because initial swarming of neutrophils at the site of infection occurs within minutes, rather than the hours required to induce "emergency granulopoiesis," the relevance of having high numbers of these cells available at any one time is obvious. We observed that germ-free (GF) animals show delayed clearance of an apathogenic bacterium after systemic challenge. In this article, we show that the size of the bone marrow myeloid cell pool correlates strongly with the complexity of the intestinal microbiota. The effect of colonization can be recapitulated by transferring sterile heat-treated serum from colonized mice into GF wild-type mice. TLR signaling was essential for microbiota-driven myelopoiesis, as microbiota colonization or transferring serum from colonized animals had no effect in GF MyD88(-/-)TICAM1(-/-) mice. Amplification of myelopoiesis occurred in the absence of microbiota-specific IgG production. Thus, very low concentrations of microbial Ags and TLR ligands, well below the threshold required for induction of adaptive immunity, set the bone marrow myeloid cell pool size. Coevolution of mammals with their microbiota has probably led to a reliance on microbiota-derived signals to provide tonic stimulation to the systemic innate immune system and to maintain vigilance to infection. This suggests that microbiota changes observed in dysbiosis, obesity, or antibiotic therapy may affect the cross talk between hematopoiesis and the microbiota, potentially exacerbating inflammatory or infectious states in the host.

Relevance: 80.00%

Abstract:

This study examines the effect of the Great Moderation on the relationship between U.S. output growth and its volatility over the period 1947 to 2006. First, we consider the possible effects of structural change in the volatility process. In so doing, we employ GARCH-M and ARCH-M specifications of the process describing the output growth rate and its volatility, with and without a one-time structural break in volatility. Second, our data analyses and empirical results suggest no significant relationship between the output growth rate and its volatility, favoring the traditional wisdom of dichotomy in macroeconomics. Moreover, the evidence shows that the time-varying variance falls sharply, or even disappears, once we incorporate a one-time structural break in the unconditional variance of output starting in 1982 or 1984. That is, the integrated GARCH effect proves spurious. Finally, a joint test of a trend change and a one-time shift in the volatility process finds that the one-time shift dominates.
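As an illustration (my sketch, not the paper's exact model), a GARCH(1,1)-M specification of output growth with a one-time break in the unconditional variance can be written as follows, where D_t is a dummy equal to 0 before the break date (e.g., 1984) and 1 afterward:

    % GARCH(1,1)-M sketch with a one-time variance break
    y_t = \mu + \lambda \sqrt{h_t} + \varepsilon_t ,
        \qquad \varepsilon_t \mid \mathcal{I}_{t-1} \sim N(0, h_t)
    h_t = \omega + \delta D_t + \alpha \varepsilon_{t-1}^2 + \beta h_{t-1}

The in-mean coefficient \lambda captures the growth-volatility relationship the study finds insignificant, and the spurious integrated GARCH effect corresponds to \alpha + \beta appearing close to 1 only when the break term \delta D_t is omitted.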

Relevance: 80.00%

Abstract:

This study aimed to determine the barriers and facilitators to nurses' acceptance of the Johnson and Johnson Protectiv® Plus IV catheter safety needle device and the implications for needlestick injuries at St. Luke's Episcopal Hospital, Houston, Texas. A one-time cross-sectional survey of 620 responding nurses was conducted by this researcher during December 2000. The study objectives were to: (1) describe the perceived (a) organizational and individual barriers and facilitators and (b) acceptance of implementation of the IV catheter device; (2) examine the relative importance of these predictors; (3) describe (a) perceived changes in needlestick injuries after implementation of the device, (b) the reported incidence of injuries, and (c) the extent of underreporting by nurses; and (4) examine the relative importance of (a) the preceding predictors and (b) acceptance of the device in predicting perceived changes in needlestick injuries. Safety climate and training were evaluated as organizational factors. Individual factors evaluated were experience with the device, including time using it and frequency of use, and background information, including nursing unit and length of time as a nurse in this hospital and in total nursing career. The conceptual framework was based upon the safety climate model. Descriptive statistics and multiple and logistic regression were utilized to address the study objectives. The findings showed widespread acceptance of the device and a strong perception that it reduced the number of needlesticks. Acceptance was notably predicted by adequate training, appropriate time between training and device use, a solid safety climate, and short length of service, in that order. A barrier to acceptance was nurses' long use of previous needle technologies. Over four-fifths of nurses were compliant in always using the device. Compliance had two facilitators: length of time using the device and, to a lesser extent, safety climate. Rates of compliance tended to be lower among nurses in units in which the device was frequently used. High-quality training and an atmosphere of caring about nurse safety stand out as primary facilitators that other institutions would need to adopt in order to achieve maximum success in implementing safety programs involving utilization of new safety devices.

Relevance: 80.00%

Abstract:

Indoor air quality (IAQ) can have significant implications for health, productivity, job performance, and operating cost. Professional experience in the field of indoor air quality suggests that high expectations of workplace indoor air quality (better than nationally established standards such as those of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE)) lead to increased air quality complaints. To determine whether there is a positive association between expectations and indoor air quality complaints, a one-time descriptive and analytical cross-sectional pilot study was conducted. Area Safety Liaisons (n = 330) at the University of Texas Health Science Center at Houston were asked to answer a questionnaire regarding their expectations of four workplace indoor air quality indicators (temperature, relative humidity, carbon dioxide, and carbon monoxide) and whether they experienced and reported indoor air quality problems. A chi-square test for independence was used to evaluate associations among the variables of interest. The response rate was 54% (n = 177). Results did not show significant associations between expectations and indoor air quality complaints. However, a greater proportion of Area Safety Liaisons who expected indoor air quality indicators to be better than the established standard experienced greater indoor air quality problems. Similarly, a slightly higher proportion of Area Safety Liaisons who expected indoor air quality indicators to be better than the standard reported greater indoor air quality complaints. The findings indicated that a greater proportion of Area Safety Liaisons with high expectations (conditions beyond what is considered normal and acceptable by ASHRAE) experienced greater indoor air quality discomfort. This result suggests a positive association between high expectations and experienced and reported indoor air quality complaints. Future studies may be able to address whether the frequency of complaints and resulting investigations can be reduced through information and education about what are acceptable conditions.
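For readers unfamiliar with the test mentioned above, a minimal chi-square test of independence on a hypothetical 2x2 table (expectation level versus reported complaints; the counts are invented purely for illustration and are not the study's data) could look like this in Python:

    # Hypothetical 2x2 contingency table: rows = expectation level
    # (high, normal), columns = reported IAQ complaints (yes, no).
    # Counts are invented for illustration only.
    from scipy.stats import chi2_contingency

    table = [[40, 55],   # high expectations
             [30, 52]]   # normal expectations

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
    # A p-value above 0.05 would mirror the study's finding of no
    # statistically significant association.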

Relevance: 80.00%

Abstract:

DSDP Hole 504B is the deepest section drilled into oceanic basement, penetrating through a 571.5-m lava pile and a 209-m transition zone of lavas and dikes into 295 m of a sheeted dike complex. To define the basement composition, 194 samples of least-altered basalts, representing all lithologic units, were analyzed for their major and 26 trace elements. As is evident from the alteration-sensitive indicators H2O+, CO2, S, K, Mn, Zn, Cu, and the iron oxidation ratio, all rocks recovered are chemically altered to some extent. Downhole variation in these parameters enables us to distinguish five depth-related alteration zones that closely correlate with changes in alteration mineralogy. Alteration in the uppermost basement portion is characterized by pronounced K uptake, sulfur loss, and iron oxidation and clearly demonstrates low-temperature seawater interaction. A particularly striking type of alteration is confined to the depth range from 910 to 1059 m below seafloor (BSF). Rocks from this basement portion exhibit the lowest iron oxidation, the highest H2O+ contents, and a considerable enrichment in Mn, S, Zn, and Cu. At the top of this zone a stockwork-like sulfide mineralization occurs. The chemical data suggest that this basement portion was at one time within a hydrothermal upflow zone. The steep gradient in alteration chemistry above this zone and the ore precipitation are interpreted as the result of mixing of the upflowing hydrothermal fluids with lower-temperature solutions circulating in the lava pile. Despite the chemical alteration, the primary composition and variation of the rocks can be reliably established. All data demonstrate that the pillow lavas and the dikes are remarkably uniform and display almost the same range of variation. A general characteristic of the rocks, which classify as olivine tholeiites, is their high MgO contents (up to 10.5 wt.%) and their low K abundances (~200 ppm). According to their mg-values, which range from 0.60 to 0.74, most basalts appear to have undergone some high-level crystal fractionation. Despite the overall similarity in composition, there are two major basalt groups that have significantly different abundances and ratios of incompatible elements at similar mg-values. The majority of the basalts from the pillow lava and dike sections are chemically closely related and most probably represent differentiation products of a common parental magma. They are low in Na2O, TiO2, and P2O5, and very low in the more hygromagmaphile elements. Interdigitated with this basalt group is a very rarely occurring basalt that is higher in Na2O, TiO2, and P2O5, much less depleted in hygromagmaphile elements, and similar to normal mid-ocean ridge basalt (MORB). The latter is restricted to Lithologic Units 5 and 36 of the pillow lava section and Lithologic Unit 83 of the dike section. The two basalt groups cannot be related by differentiation processes but have to be regarded as products of two different parental magmas. The compositional uniformity of the majority of the basalts suggests that the magma chamber beneath the Costa Rica Rift reached nearly steady-state conditions. However, the presence of lavas and dikes that crystallized from a different parental magma requires the existence of a separate conduit-magma chamber system for these melts. Occasionally, mixing between the two magma types appears to have occurred. The chemical characteristics of the two magma types imply some heterogeneity in the mantle source underlying the Costa Rica Rift. The predominant magma type represents an extremely depleted source, whereas the rare magma type presumably originated from regions of less depleted mantle material (relict or affected by metasomatism).

Relevance: 80.00%

Abstract:

Our record of Younger Dryas intermediate-depth seawater Δ14C from North Atlantic deep-sea corals supports a link between abrupt climate change and intermediate ocean variability. Our data show that northern-source intermediate water (~1700 m) was partially replaced by 14C-depleted southern-source water at the onset of the event, consistent with a reduction in the rate of North Atlantic Deep Water formation. This transition requires the existence of large, mobile gradients of Δ14C in the ocean during the Younger Dryas. The Δ14C water column profile from Keigwin (2004) provides direct evidence for the presence of one such gradient at the beginning of the Younger Dryas (~12.9 ka), with a 100 per mil offset between shallow (<~2400 m) and deep water. Our early Younger Dryas data are consistent with this profile and also show a Δ14C inversion, with 35 per mil more enriched water at ~2400 m than at ~1700 m. This feature is probably the result of mixing between relatively well 14C-ventilated northern-source water and more poorly 14C-ventilated southern-source intermediate water, which is slightly shallower. Over the rest of the Younger Dryas our intermediate-water/deepwater coral Δ14C data gradually increase, while the atmospheric Δ14C drops. For a very brief interval at ~12.0 ka and at the end of the Younger Dryas (11.5 ka), intermediate-water Δ14C (~1200 m) approached atmospheric Δ14C. These enriched Δ14C results suggest an enhanced initial Δ14C content of the water and demonstrate the presence of large lateral Δ14C gradients in the intermediate/deep ocean in addition to the sharp vertical shift at ~2500 m. The transient Δ14C enrichment at ~12.0 ka occurred in the middle of the Younger Dryas and demonstrates that there is at least one time when the intermediate/deep ocean underwent dramatic change but with much smaller effects in other paleoclimatic records.

Relevance: 80.00%

Abstract:

We compared the responses of native and non-native populations of the seaweed Gracilaria vermiculophylla to heat shock in common garden-type experiments. Specimens from six native populations in East Asia and from eight non-native populations in Europe and on the Mexican Pacific coast were acclimated to two sets of identical conditions before their resistance to heat shock was examined. The experiments were carried out twice - one time in the native range in Qingdao, China and one time in the invaded range in Kiel, Germany - to rule out effects of specific local conditions. In both testing sites the non-native populations survived heat shock significantly better than the native populations; the data underlying this statement are presented in https://doi.pangaea.de/10.1594/PANGAEA.859335. After three hours of heat shock, G. vermiculophylla exhibited increased levels of heat shock protein 70 (HSP70) and of a specific isoform of haloperoxidase, suggesting that both enzymes could be required for heat shock stress management. However, the elevated resistance toward heat shock of non-native populations only correlated with an increased constitutive expression of HSP70. The haloperoxidase isoform was more prominent in native populations, suggesting that not only increased HSP70 expression, but also reduced allocation to haloperoxidase expression after heat shock, was selected for during the invasion history. The data describing the expression of HSP70 and three different isoforms of haloperoxidase are presented in https://doi.pangaea.de/10.1594/PANGAEA.859358.

Relevance: 80.00%

Abstract:

Energy management has always been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems where multiple functions are widely supported. Nowadays, it is common for a mobile system user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make their best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between the total performance of applications and the target lifetime to the user. This thesis provides a new way to deal with the problem. It argues that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime to a target application by restricting the average power of less important applications, and beyond that, maximize the total performance of applications without harming the lifetime guarantee. To support this, energy, instead of CPU time or transmission bandwidth, should be globally managed by the OS as the first-class resource. As the first stage of a complete PM scheme, this thesis presents energy-based fair queuing, a novel class of energy-aware scheduling algorithms which, in combination with a mechanism for restricting the battery discharge rate, systematically manage energy as the first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing is a cross-application of traditional fair queuing to the energy management domain. It assigns a power share to each task and manages energy by serving energy to tasks in proportion to their assigned power shares. The proportional energy use establishes proportional sharing of the system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation of any task. Energy-based fair queuing treats all tasks equally as one type and supports periodic time-sensitive tasks by allocating each of them a share of system power adequate to meet the highest energy demand across all periods. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To provide more effective and flexible support for various types of time-sensitive tasks in general-purpose operating systems, an extra real-time-friendly mechanism is introduced to combine priority-based scheduling with energy-based fair queuing. Since a method is available to control the maximum time one time-sensitive task can run with priority, the power control and time-constraint meeting can be flexibly traded off. A SystemC-based test-bench was designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, time-constraint compliance, and a proper trade-off between them.
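As a rough illustration of the scheduling idea (a minimal sketch under my own assumptions, not the thesis's implementation): as in weighted fair queuing, each task carries a virtual time that advances by the energy it consumes divided by its power share, and the scheduler always runs the task with the smallest virtual time. The energy-accounting hook and all names below are hypothetical.

    # Minimal energy-based fair queuing sketch (hypothetical API).
    # A task's virtual time advances by (energy used / power share);
    # picking the smallest virtual time yields proportional energy
    # sharing over time and prevents energy starvation.
    import heapq

    class Task:
        def __init__(self, name, power_share):
            self.name = name
            self.share = power_share   # relative weight, e.g. 2.0
            self.vtime = 0.0           # energy-domain virtual time

    class EnergyFairQueue:
        def __init__(self, tasks):
            self.heap = [(t.vtime, i, t) for i, t in enumerate(tasks)]
            heapq.heapify(self.heap)

        def run_next(self, measure_energy):
            # Run the task with the lowest virtual time for one
            # quantum; measure_energy(task) returns joules consumed.
            vtime, i, task = heapq.heappop(self.heap)
            joules = measure_energy(task)       # hypothetical hook
            task.vtime += joules / task.share   # weighted advance
            heapq.heappush(self.heap, (task.vtime, i, task))
            return task.name

    # A task with twice the share is charged half as much virtual
    # time per joule, so over time it receives twice the power.
    sched = EnergyFairQueue([Task("video", 2.0), Task("sync", 1.0)])
    for _ in range(6):
        print(sched.run_next(lambda t: 1.0))  # pretend 1 J per quantum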

Relevance: 80.00%

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions. These have the advantage of being independent from the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of a given platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to compute statically time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on that concrete platform. The approach has been implemented and integrated in the CiaoPP system.

Relevance: 80.00%

Abstract:

Effective static analyses have been proposed which allow inferring functions that bound the number of resolutions or reductions. These have the advantage of being independent from the platform on which the programs are executed, and such bounds have been shown useful in a number of applications, such as granularity control in parallel execution. On the other hand, in certain distributed computation scenarios where different platforms come into play, each with different capabilities, it is more interesting to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution time. With this objective in mind, we propose a method which allows inferring upper and lower bounds on the execution times of the procedures of a program on a given execution platform. The approach combines compile-time cost bounds analysis with a one-time profiling of the platform in order to determine the values of certain constants for that platform. These constants calibrate a cost model which, from then on, is able to compute statically time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
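A minimal sketch of the calibration idea described in the two abstracts above (hypothetical names, not the actual CiaoPP code): static analysis supplies a platform-independent bound on how many resolution steps a procedure performs as a function of input size, and a one-time profiling run supplies the time one step costs on the platform at hand.

    # Sketch of a calibrated cost model (hypothetical, not CiaoPP's
    # code). Static analysis gives per-procedure bounds on resolution
    # counts; one-time profiling gives seconds per resolution here.
    import time

    def profile_resolution_cost(trials=100_000):
        # One-time profiling: time a trivial stand-in for one
        # resolution step and average over many trials.
        def step(x): return x + 1
        t0 = time.perf_counter()
        acc = 0
        for _ in range(trials):
            acc = step(acc)
        return (time.perf_counter() - t0) / trials

    def resolution_upper_bound(n):
        # Example compile-time bound from static analysis:
        # at most n**2 + 3*n resolutions on input size n.
        return n**2 + 3*n

    SECONDS_PER_RESOLUTION = profile_resolution_cost()

    def time_upper_bound(n):
        # Predicted worst-case execution time on this platform.
        return resolution_upper_bound(n) * SECONDS_PER_RESOLUTION

    print(f"predicted bound for n=1000: {time_upper_bound(1000):.4f} s")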

Relevance: 80.00%

Abstract:

IP Multimedia Subsystem (IMS) is considered to provide multimedia services to users through an IP-based control plane. The current IMS service invocation mechanism, however, requires the Serving-Call Session Control Function (S-CSCF) to invoke each Application Server (AS) sequentially according to the service subscription profile, which results in a heavy load on the S-CSCF and a long session set-up delay. To solve this issue, this paper proposes a linear chained service invocation mechanism that invokes the ASs consecutively. By checking all the initial Filter Criteria (iFC) one time and adding the addresses of all involved ASs to the "Route" header, this new approach enables multiple services to be invoked as a linear chain during a session. We model the service invocation mechanisms through Jackson networks, which are validated through simulations. The analytic results verify that the linear chained service invocation mechanism can effectively reduce the session set-up delay of the service layer and decrease the load level of the S-CSCF.
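To make the mechanism concrete, here is a toy sketch (hypothetical addresses and data structures; real iFC are XML service point triggers in the user's subscription profile, and SIP details are simplified) of evaluating all iFC one time and emitting a single chained Route header:

    # Toy sketch of linear chained service invocation (hypothetical
    # data). The S-CSCF evaluates all iFC once and places every
    # matching AS in one Route header, so the request traverses the
    # ASs as a chain instead of returning to the S-CSCF after each.
    ifc_list = [  # (priority, trigger predicate, AS SIP URI)
        (1, lambda req: req["method"] == "INVITE", "sip:as1.example.net"),
        (2, lambda req: "video" in req["sdp"],     "sip:as2.example.net"),
        (3, lambda req: False,                     "sip:as3.example.net"),
    ]

    def build_route_header(request, scscf_uri):
        matched = [uri for _, trigger, uri in sorted(ifc_list)
                   if trigger(request)]
        # Chain all matched ASs, then return to the S-CSCF at the end.
        return "Route: " + ", ".join(f"<{u};lr>" for u in matched + [scscf_uri])

    request = {"method": "INVITE", "sdp": "audio video"}
    print(build_route_header(request, "sip:scscf.example.net"))
    # Route: <sip:as1.example.net;lr>, <sip:as2.example.net;lr>,
    #        <sip:scscf.example.net;lr>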

Relevance: 80.00%

Abstract:

In the last few years there has been heightened interest in data treatment and analysis with the aim of discovering hidden knowledge and eliciting relationships and patterns within this data. Data mining techniques (also known as Knowledge Discovery in Databases) have been applied over a wide range of fields such as marketing, investment, fraud detection, manufacturing, telecommunications and health. In this study, well-known data mining techniques such as artificial neural networks (ANN), genetic programming (GP), forward selection linear regression (LR) and k-means clustering are proposed to the health and sports community in order to aid with resistance training prescription. Appropriate resistance training prescription is effective for developing fitness and health and for enhancing general quality of life. Resistance exercise intensity is commonly prescribed as a percentage of the one repetition maximum. 1RM, dynamic muscular strength, one repetition maximum or one execution maximum, is operationally defined as the heaviest load that can be moved over a specific range of motion, one time and with correct performance. The safety of the 1RM assessment has been questioned, as such an enormous effort may lead to muscular injury. Prediction equations could help to tackle the problem of predicting the 1RM from submaximal loads, in order to avoid or at least reduce the associated risks. We built different models from data on 30 men who performed up to 5 sets to exhaustion at different percentages of the 1RM in the bench press action, until reaching their actual 1RM. Also, a comparison of different existing prediction equations is carried out. The LR model seems to outperform the ANN and GP models for 1RM prediction in the range between 1 and 10 repetitions. At 75% of the 1RM some subjects (n = 5) could perform 13 repetitions with proper technique in the bench press action, whilst other subjects (n = 20) performed statistically significantly (p < 0.05) more repetitions at 70% than at 75% of their actual 1RM in the bench press action. Rating of perceived exertion (RPE) seems not to be a good predictor of the 1RM when all the sets are performed until exhaustion, as no significant differences (p < 0.05) were found in the RPE at 75%, 80% and 90% of the 1RM. Also, years of experience and weekly hours of strength training are better correlated with the 1RM (p < 0.05) than body weight. The O'Connor et al. 1RM prediction equation seems to arise from the data gathered and appears to be the most accurate of those proposed in the literature and used in this study. Epley's 1RM prediction equation is reproduced by means of data simulation from 1RM literature equations. Finally, future lines of research are proposed in relation to the problem of 1RM prediction by means of genetic algorithms, neural networks and clustering techniques.
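Since the abstract names them, the two best-known prediction equations can be written out; these forms are standard in the strength training literature (Epley; O'Connor et al.), and the example numbers are purely illustrative:

    # Standard 1RM prediction equations named in the abstract.
    # w = submaximal load lifted (kg), r = repetitions to failure.
    def epley_1rm(w, r):
        # Epley: 1RM = w * (1 + r/30)
        return w * (1 + r / 30)

    def oconnor_1rm(w, r):
        # O'Connor et al.: 1RM = w * (1 + 0.025 * r)
        return w * (1 + 0.025 * r)

    # Example: 80 kg bench press performed for 8 reps to failure.
    print(f"Epley:    {epley_1rm(80, 8):.1f} kg")    # ~101.3 kg
    print(f"O'Connor: {oconnor_1rm(80, 8):.1f} kg")  # ~96.0 kg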

Relevance: 80.00%

Abstract:

Modern mobile devices have been becoming increasingly powerful in functionality and entertainment as next-generation mobile computing and communication technologies rapidly advance. However, battery capacity has not experienced an equivalent increase. The user experience of modern mobile systems is therefore greatly affected by the battery lifetime, which is an unstable factor that is hard to control. To address this problem, previous works proposed energy-centric power management (PM) schemes to provide a strong guarantee on the battery lifetime by globally managing energy as the first-class resource in the system. As the processor scheduler plays a pivotal role in power management and application performance guarantees, this thesis explores the user experience optimization of energy-limited mobile systems from the perspective of energy-centric processor scheduling. This thesis first analyzes the general contributing factors of the mobile system user experience. It then determines the essential requirements on energy-centric processor scheduling for user experience optimization, which are proportional power sharing, time-constraint compliance, and, when necessary, a trade-off between the power share and the time-constraint compliance. To meet these requirements, the classical fair queuing algorithm and its reference model are extended from the network and CPU bandwidth sharing domains to the energy sharing domain, and based on that, the energy-based fair queuing (EFQ) algorithm is proposed for performing energy-centric processor scheduling. The EFQ algorithm is designed to provide proportional power shares to tasks by scheduling the tasks based on their energy consumption and weights. The power share of each time-sensitive task is protected upon changes of the scheduling environment to guarantee stable performance, and any instantaneous power share that is overly allocated to one time-sensitive task can be fairly re-allocated to the other tasks. In addition, to better support real-time and multimedia scheduling, a real-time-friendly mechanism is combined into the EFQ algorithm to give time-limited scheduling preference to the time-sensitive tasks. Through high-level modelling and simulation, the properties of the EFQ algorithm are evaluated. The simulation results indicate that the essential requirements of energy-centric processor scheduling can be achieved. The EFQ algorithm is later implemented in the Linux kernel. To assess the properties of the Linux-based EFQ scheduler, an experimental test-bench based on an embedded platform, a multithreading test-bench program, and an open-source benchmark suite was developed. Through specifically designed experiments, this thesis first verifies the properties of EFQ in power-share management and real-time scheduling, and then explores the potential benefits of employing EFQ scheduling in user experience optimization for energy-limited mobile systems. Experimental results on power-share management show that EFQ is more effective than the Linux CFS scheduler in managing power shares and that it can achieve proportional sharing of the system power regardless of on which device the energy is spent. Experimental results on real-time scheduling demonstrate that EFQ can achieve effective, flexible and robust time-constraint compliance upon increases in energy estimation error and task number. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as those of the default Linux scheduler, in optimizing and preserving the user experience of energy-limited mobile systems.