992 results for MEDICIONES DE TARGET STRENGTH


Relevance:

30.00%

Publisher:

Abstract:

The interaction of an ultraintense laser pulse with a conical target is studied by means of numerical particle-in-cell simulations in the context of fast ignition. The divergence of the fast electron beam generated at the tip of the cone has been shown to be a crucial parameter for the efficient coupling of the ignition laser pulse to the precompressed fusion pellet. In this paper, we demonstrate that a focused hot electron beam is produced at the cone tip, provided that electron currents flowing along the surfaces of the cone sidewalls are efficiently generated. The influence of various interaction parameters over the formation of these wall currents is investigated. It is found that the strength of the electron flows is enhanced for high laser intensities, low density targets, and steep density gradients inside the cone. The hot electron energy distribution obeys a power law for energies of up to a few MeV, with the addition of a high-energy Maxwellian tail.
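
The abstract does not give the fitted parameters, but the spectrum it describes corresponds to a piecewise form of the kind below (the break energy E_c, the power-law index \alpha and the hot-electron temperature T_h are assumptions, to be fitted to the simulated data):

\[ f(E) \;\propto\; \begin{cases} E^{-\alpha}, & E \lesssim E_c \ (\text{a few MeV}), \\ \exp(-E/T_h), & E \gtrsim E_c. \end{cases} \]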

Relevance:

30.00%

Publisher:

Abstract:

The effect of colloidal nanosilica on the fresh and rheological parameters, plastic shrinkage, heat of hydration, and compressive strength of cement-based grouts is investigated in this paper. The fresh and rheological properties were evaluated by the minislump flow, Marsh cone flow time, Lombardi plate cohesion meter, yield value, and plastic viscosity. The key parameters investigated were the dosages of nanosilica and superplasticizer and the temperature of the mixing water. Statistical models and isoresponse curves were developed to capture the significant trends. The dosage of nanosilica had a significant effect on the results. An increase in the dosage of nanosilica increased the flow time, plate cohesion meter reading, yield stress, plastic viscosity, heat of hydration at 1 day and 3 days, and compressive strength at 1 day, while reducing the minislump, plastic shrinkage up to 24 h, and compressive strength at 3, 7, and 28 days. Conversely, an increase in the dosage of superplasticizer decreased the flow time, plate cohesion meter reading, yield stress, plastic viscosity, heat of hydration at 1 day and 3 days, and compressive strength at 1 day, while increasing the minislump, plastic shrinkage, and compressive strength at 3 and 7 days. Increasing the temperature of the mixing water led to a notable increase in the minislump, flow time, plastic viscosity, heat of hydration at 3 days, and compressive strength at 1 day, while it reduced the plate cohesion and the compressive strength at 3, 7, and 28 days. The statistical models developed in this study can facilitate optimizing grout mixture proportions for target performance by reducing the number of trial batches needed.
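
As an illustration of the kind of statistical model described (the paper's actual factor levels, responses and fitted coefficients are not reproduced here; every number below is hypothetical), a second-order response surface can be fitted by ordinary least squares and then sliced into isoresponse curves:

```python
# Hypothetical sketch of a response-surface fit for grout properties.
# Factors: nanosilica dosage (%), superplasticizer dosage (%), water temperature (deg C).
# Response: e.g., Marsh cone flow time (s). All values are illustrative.
import numpy as np

X = np.array([
    [2.0, 0.5, 10.0], [4.0, 0.5, 10.0], [2.0, 1.5, 10.0], [4.0, 1.5, 10.0],
    [2.0, 0.5, 30.0], [4.0, 0.5, 30.0], [2.0, 1.5, 30.0], [4.0, 1.5, 30.0],
    [3.0, 1.0, 20.0],
])
y = np.array([55.0, 78.0, 41.0, 60.0, 62.0, 88.0, 47.0, 69.0, 61.0])

def design_matrix(X):
    # Intercept, linear terms, and two-factor interactions.
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), x1, x2, x3, x1*x2, x1*x3, x2*x3])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))

# Isoresponse curves follow by evaluating the fitted surface over a grid of two
# factors while holding the third fixed, then contouring the result.
```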

Relevance:

30.00%

Publisher:

Abstract:

The photoemission optogalvanic (POG) effect has been observed by irradiating a copper target electrode in a nitrogen discharge cell with 1.06 μm and frequency-doubled 532 nm Nd:YAG laser pulses. Measurement of the variation of POG signal strength with 532 nm laser fluence confirms two-photon-induced photoelectric emission from copper. With 1.06 μm laser pulses, however, thermally assisted photoemission is observed.
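
The physics behind this check is the standard fluence scaling of multiphoton photoemission: for an n-photon process the emitted charge, and hence the POG signal S, scales as

\[ S \propto F^{\,n} \quad\Longrightarrow\quad \log S = n \log F + \text{const}, \]

so a slope of approximately 2 on a log-log plot of POG signal versus 532 nm fluence is the signature of two-photon emission, whereas a departure from a pure power law at 1.06 μm is consistent with the thermally assisted mechanism.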

Relevance:

30.00%

Publisher:

Abstract:

Background: Isometric grip strength, evaluated with a handgrip dynamometer, is a marker of current nutritional status and cardiometabolic risk, and of future morbidity and mortality. We present reference values for handgrip strength in healthy young Colombian adults (aged 18 to 29 years). Methods: The sample comprised 5,647 apparently healthy young university students (2,330 men and 3,317 women; mean age 20.6 ± 2.7 years) attending public and private institutions in the cities of Bogota and Cali (Colombia). Handgrip strength was measured twice in both hands with a TKK analogue dynamometer, and the highest value was used in the analysis. Sex- and age-specific normative values for handgrip strength were calculated using the LMS method and expressed as tabulated percentiles from 3 to 97 and as smoothed centile curves (P3, P10, P25, P50, P75, P90 and P97). Results: Mean values for right and left handgrip strength were 38.1 ± 8.9 and 35.9 ± 8.6 kg for men, and 25.1 ± 8.7 and 23.3 ± 8.2 kg for women, respectively. Handgrip strength increased with age in both sexes and was significantly higher in men in all age categories. The results were generally more homogeneous amongst men than women. Conclusions: Sex- and age-specific handgrip strength normative values for healthy young Colombian adults are defined. This information may be helpful in future studies of secular trends in handgrip strength and in identifying clinically relevant cut points for poor nutritional status and elevated cardiometabolic risk in a Latin American population. Evidence of a decline in handgrip strength before the end of the third decade is of concern and warrants further investigation.
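
For reference, the LMS method summarizes each age group by a Box-Cox power L(t), a median M(t) and a coefficient of variation S(t); any smoothed centile then follows as

\[ C_{100\alpha}(t) = M(t)\,\bigl[1 + L(t)\,S(t)\,z_{\alpha}\bigr]^{1/L(t)}, \]

where z_\alpha is the normal deviate of the desired centile (e.g., z \approx 1.881 for P97). The tabulated P3-P97 values reported in the paper are evaluations of this expression at each age and sex.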

Relevance:

30.00%

Publisher:

Abstract:

[EN] Strength training is usually associated with a reduction in fat mass and with muscle hypertrophy. The aim of the present study was to examine whether the serum free leptin index (FLI), measured by the molar excess of soluble leptin receptor (sOB-R) over leptin, is increased by 6 weeks of strength training. Eighteen male physical education students were randomly assigned to two groups: a strength-training group (n = 12) and a control group (n = 6). Body composition (lean body mass and body fat) determined by dual-energy X-ray absorptiometry (DXA), muscle performance, and leptin, sOB-R, total testosterone and free testosterone concentrations were determined before and after training. Fat mass was reduced by 1 kg with strength training (P<0.05). Lean body mass of the trained extremities increased by 3% (P<0.05), while the concentration of free testosterone in serum was reduced by 17% (P<0.05) after training. However, despite the reduction in fat mass and free testosterone, serum leptin concentration was not significantly affected by strength training, even after accounting for the differences in body fat. By contrast, for a given fat mass, the sOB-R was increased by 13% (P<0.05) at the end of the strength-training programme, although the molar excess of sOB-R over leptin remained unchanged. Therefore, the quantity of free leptin available to bind to the target tissues was not significantly affected by the short strength-training programme, which elicited a 7% reduction in fat mass.

Relevance:

30.00%

Publisher:

Abstract:

[EN] Leptin and osteocalcin play a role in the regulation of the fat-bone axis and may be altered by exercise. To determine whether osteocalcin reduces fat mass in humans fed ad libitum and whether there is a sex dimorphism in the serum osteocalcin and leptin responses to strength training, we studied 43 male (age 23.9 ± 2.4 yr, mean ± SD) and 23 female physical education students (age 23.2 ± 2.7 yr). Subjects were randomly assigned to two groups: training (TG) and control (CG). TG followed a strength training program combined with plyometric jumps for 9 wk, whereas CG did not train. Physical fitness, body composition (dual-energy X-ray absorptiometry), and serum concentrations of hormones were determined pre- and posttraining. In the whole group of subjects (pretraining), the serum concentration of osteocalcin was positively correlated (r = 0.29-0.42, P < 0.05) with whole body and regional bone mineral content, lean mass, dynamic strength, and serum free testosterone concentration (r = 0.32). However, osteocalcin was negatively correlated with leptin concentration (r = -0.37), fat mass (r = -0.31), and percent body fat (r = -0.44). Both sexes experienced similar relative improvements in performance, lean mass (+4-5%), and whole body (+0.78%) and lumbar spine bone mineral content (+1.2-2%) with training. Serum osteocalcin concentration was increased after training by 45 and 27% in men and women, respectively (P < 0.05). Fat mass was not altered by training. Vastus lateralis type II MHC composition at the start of the training program predicted 25% of the osteocalcin increase after training. Serum leptin concentration was reduced with training in women. In summary, while the relative effects of strength training plus plyometric jumps on performance, muscle hypertrophy, and osteogenesis are similar in men and women, serum leptin concentration is reduced only in women. The osteocalcin response to strength training is, in part, modulated by the muscle phenotype (MHC isoform composition). Despite the increase in osteocalcin, fat mass was not reduced.

Relevance:

30.00%

Publisher:

Abstract:

Reprogramming of gene expression contributes to the structural and functional adaptation of muscle tissue in response to altered use. The aim of this study was to investigate the mechanisms behind observed improvements in leg extension strength, gain in relative thigh muscle mass, and loss of body and thigh fat content in response to eccentric and conventional strength training in elderly men and women (n = 14 each; average age 80.1 ± 3.7 years) by means of structural and molecular analyses. Biopsies were collected from m. vastus lateralis in the resting state before and after 12 weeks of training with two weekly resistance exercise sessions (RET) or eccentric ergometer sessions (EET). Gene expression was analyzed using custom-designed low-density PCR arrays. Muscle ultrastructure was evaluated using EM morphometry. Gain in thigh muscle mass was paralleled by an increase in muscle fiber cross-sectional area (hypertrophy) with RET but not with EET, where muscle growth likely occurs by the addition of sarcomeres in series or by hyperplasia. The expression of transcripts encoding factors involved in muscle growth, repair and remodeling (e.g., IGF-1, HGF, MYOG, MYH3) was increased to a larger extent after EET than RET. MicroRNA 1 expression was decreased independent of the training modality and was paralleled by an increased expression of IGF-1, a potential target. IGF-1 is a potent promoter of muscle growth, and its regulation by microRNA 1 may have contributed to the gain of muscle mass observed in our subjects. EET depressed mitochondrial and metabolic transcripts. The changes of several metabolic and mitochondrial transcripts correlated significantly with changes in mitochondrial volume density. Intramyocellular lipid content was decreased after EET concomitantly with total body fat. Changes in intramyocellular lipid content correlated with changes in body fat content with both RET and EET. In the elderly, RET and EET lead to distinct molecular and structural adaptations which might contribute to the observed small quantitative differences in functional tests and body composition parameters. EET seems particularly convenient for the elderly with regard to improvements in body composition and strength, but at the expense of reduced muscular oxidative capacity.

Relevance:

30.00%

Publisher:

Abstract:

Target localization has a wide range of military and civilian applications in wireless mobile networks. Examples include battlefield surveillance, emergency 911 (E911), traffic alert, habitat monitoring, resource allocation, routing, and disaster mitigation. Basic localization techniques include time-of-arrival (TOA), direction-of-arrival (DOA) and received-signal-strength (RSS) estimation. Techniques based on TOA and DOA are very sensitive to the availability of line-of-sight (LOS), the direct path between the transmitter and the receiver. If LOS is not available, TOA and DOA estimation errors create a large localization error. In order to reduce NLOS localization error, NLOS identification, mitigation, and localization techniques have been proposed. This research investigates NLOS identification for multiple-antenna radio systems. The techniques proposed in the literature mainly use one antenna element to enable NLOS identification. When a single antenna is utilized, only limited features of the wireless channel can be exploited to identify NLOS situations. However, in DOA-based wireless localization systems, multiple antenna elements are available. In addition, multiple-antenna technology has been adopted in many widely used wireless systems such as wireless LAN 802.11n and WiMAX 802.16e, which are good candidates for localization-based services. In this work, the potential of spatial channel information for high-performance NLOS identification is investigated. Considering narrowband multiple-antenna wireless systems, two NLOS identification techniques are proposed. First, the spatial correlation of channel coefficients across antenna elements is proposed as a metric for NLOS identification. In order to obtain the spatial correlation, a new multi-input multi-output (MIMO) channel model based on rough surface theory is proposed. This model can be used to compute the spatial correlation between an antenna pair separated by any distance. In addition, a new NLOS identification technique that exploits the statistics of the phase difference across two antenna elements is proposed. This technique assumes the phases received across two antenna elements are uncorrelated. This assumption is validated based on the well-known circular and elliptic scattering models. Next, it is proved that the channel Rician K-factor is a function of the phase-difference variance. Exploiting the Rician K-factor, techniques to identify NLOS scenarios are proposed. Considering wideband multiple-antenna wireless systems which use MIMO orthogonal frequency division multiplexing (OFDM) signaling, space-time-frequency channel correlation is exploited to attain NLOS identification in time-varying, frequency-selective and space-selective radio channels. Novel NLOS identification measures based on space, time and frequency channel correlation are proposed and their performances are evaluated. These measures achieve better NLOS identification performance than those that use only space, time or frequency.
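
A minimal sketch of the first idea, spatial correlation across antenna elements as a LOS/NLOS metric, is given below; the estimator is a plain sample correlation, and the decision threshold is an assumed design parameter, not the thesis' tuned value:

```python
# Toy LOS/NLOS classifier from the spatial correlation of channel coefficients
# observed on two antenna elements. Under LOS, a common specular component keeps
# the coefficients strongly correlated; rich NLOS scattering decorrelates them.
import numpy as np

def spatial_correlation(h1, h2):
    """Magnitude of the sample correlation between two complex coefficient series."""
    h1 = h1 - h1.mean()
    h2 = h2 - h2.mean()
    return np.abs(np.vdot(h1, h2)) / (np.linalg.norm(h1) * np.linalg.norm(h2))

def classify(h1, h2, threshold=0.7):  # threshold is a hypothetical choice
    return "LOS" if spatial_correlation(h1, h2) > threshold else "NLOS"

rng = np.random.default_rng(0)
n = 500
common = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # shared specular part
noise = lambda: 0.2 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(classify(common + noise(), common + noise()))             # expected: LOS
ind = lambda: rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(classify(ind(), ind()))                                   # expected: NLOS
```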

Relevance:

30.00%

Publisher:

Abstract:

Approximately one-third of US adults have metabolic syndrome, the clustering of cardiovascular risk factors that includes hypertension, abdominal adiposity, elevated fasting glucose, low high-density lipoprotein (HDL) cholesterol and elevated triglyceride levels. While the definition of metabolic syndrome continues to be much debated among leading health research organizations, individuals with metabolic syndrome have an increased risk of developing cardiovascular disease and/or type 2 diabetes. A recent report by the Henry J. Kaiser Family Foundation found that the US spent $2.2 trillion (16.2% of the Gross Domestic Product) on healthcare in 2007 and cited that, among other factors, chronic diseases, including type 2 diabetes and cardiovascular disease, are large contributors to this growing national expenditure. Employers, the leading providers of health insurance, bear a substantial portion of this cost. In light of this, many employers have begun implementing health promotion efforts to counteract these rising costs. However, evidence-based practices, uniform guidelines and policy do not exist for this setting with regard to the prevention of metabolic syndrome risk factors as defined by the National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III). Therefore, the aim of this review was to determine the effects of worksite-based behavior change programs on reducing the risk factors for metabolic syndrome in adults. Using relevant search terms, OVID MEDLINE was searched for peer-reviewed literature published since 1998, resulting in 23 articles meeting the inclusion criteria. The American Dietetic Association's Evidence Analysis Process was used to abstract data from the selected articles, assess the quality of each study, compile the evidence, develop a summarized conclusion, and assign a grade based upon the strength of the supporting evidence. The results revealed that participating in a worksite-based behavior change program may be associated with improvement in one or more metabolic syndrome risk factors. Programs that delivered a higher dose (>22 hours) over a shorter duration (<2 years) using two or more behavior-change strategies were associated with more metabolic risk factors being positively impacted. A Conclusion Grade of III was obtained for the evidence, indicating that studies were of weak design or that results were inconclusive due to inadequate sample sizes, bias and lack of generalizability. These results provide some support for the continued use of worksite-based health promotion, and further research is needed to determine whether multi-strategy, intense behavior change programs targeting multiple risk factors can sustain health improvements in the long term.

Relevance:

30.00%

Publisher:

Abstract:

The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within the coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to strike a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed, consensus-based version of the Gauss-Newton method. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One inconvenience associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we extend the model to account for more realistic scenarios by the introduction of an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem under the energy-efficiency perspective, the network's lifetime is significantly improved.
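
A rough, centralized sketch of the localization step, under an assumed log-distance path-loss model with illustrative parameters (the text above develops distributed, consensus-based versions of this kind of search):

```python
# Gauss-Newton refinement of a target position from RSSI readings, assuming
# the log-distance model rssi_i = P0 - 10*eta*log10(||x - a_i||). P0, eta and
# all coordinates below are illustrative values.
import numpy as np

P0, ETA = -40.0, 3.0   # assumed RSSI at 1 m (dBm) and path-loss exponent

def rssi_model(x, anchors):
    d = np.linalg.norm(anchors - x, axis=1)
    return P0 - 10.0 * ETA * np.log10(d)

def gauss_newton(rssi, anchors, x0, iters=25, step=0.5):
    x = x0.astype(float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        r = rssi - rssi_model(x, anchors)                    # residuals (dB)
        # Jacobian of the model: d(rssi_i)/dx = (10*ETA/ln 10) * (a_i - x) / d_i^2
        J = (10.0 * ETA / np.log(10.0)) * (anchors - x) / (d**2)[:, None]
        x += step * np.linalg.lstsq(J, r, rcond=None)[0]     # damped update
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_x = np.array([3.0, 6.0])
rng = np.random.default_rng(1)
readings = rssi_model(true_x, anchors) + rng.normal(0.0, 1.0, len(anchors))
print(gauss_newton(readings, anchors, x0=np.array([5.0, 5.0])))  # close to (3, 6)
```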

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks are posed as the new communication paradigm where the use of small, low-complexity, and low-power devices is preferred over costly centralized systems. The spectrum of potential applications of sensor networks is very wide, including monitoring, surveillance, and localization, among others. Localization is a key application in sensor networks, and the use of simple, efficient, and distributed algorithms is of paramount practical importance. Combining convex optimization tools with consensus algorithms, we propose a distributed localization algorithm for scenarios where received signal strength indicator readings are used. We approach the localization problem by formulating an alternative problem that uses distance estimates locally computed at each node. The formulated problem is solved via a relaxed version obtained with the semidefinite relaxation technique. Conditions under which the relaxed problem yields the same solution as the original problem are given, and a distributed consensus-based implementation of the algorithm is proposed based on an augmented Lagrangian approach and primal-dual decomposition methods. Although suboptimal, the proposed approach is very suitable for implementation in real sensor networks: it is scalable, robust against node failures, and requires only local communication among neighboring nodes. Simulation results show that running an additional local search around the found solution can yield performance close to the maximum likelihood estimate.
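
To make the consensus ingredient concrete, the toy sketch below runs plain average consensus with Metropolis weights; the paper combines such local exchanges with an augmented Lagrangian / primal-dual scheme rather than simple averaging, and the topology and values here are illustrative:

```python
# Average consensus over a hypothetical 4-node network: each node repeatedly
# replaces its value with a weighted average of its neighbors' values and
# converges to the network-wide mean without any fusion center.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
n = 4
deg = np.zeros(n, dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = np.zeros((n, n))
for i, j in edges:                      # Metropolis weights
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
W += np.diag(1.0 - W.sum(axis=1))       # self-weights make rows sum to one

x = np.array([2.0, 4.0, 6.0, 8.0])      # local measurements
for _ in range(50):
    x = W @ x                           # one neighbor exchange per iteration
print(x)                                # every entry approaches the mean, 5.0
```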

Relevance:

30.00%

Publisher:

Abstract:

The boom in recent years in the repair of concrete buildings and structures has driven the development of increasingly technical repair mortars. When producers develop these mortars, the use of polymers in the formulations is a key point of debate, because the performance/price/application trade-off does not always justify it. This thesis is an exhaustive study justifying the use of these mortars as structural repair mortars in response to current demand, and it is organized in three parts.

The first part is a study of the state of the art of mortars and their constituents. The use of mortars dates back to antiquity, with gypsum and lime as the main components. The Greeks and Romans developed the concept of lime mortars, introducing components such as pozzolans, hydraulic limes and marble-dust aggregates, producing mortars very similar to present-day concretes. In the Middle Ages and the Renaissance the technology developed by the Romans was lost, owing to the extensive use of stone in civil, defensive and religious construction. It was not until the 19th century that J. Aspdin discovered modern cement as the main hydraulic compound. Finally, in the 20th century, with the appearance of molecules such as styrene, melamine, vinyl chloride and polyesters, the polymer industry developed and polymers were added to mortars, giving rise to "composites". The use of polymers in cementitious matrices gives the mortar properties such as adhesion, flexibility and workability, as has been known since the 1930s with the use of natural rubbers. Currently, synthetic polymers (polyvinyl acetate, styrene-butadiene, vinyl-acrylic and epoxy resins) mainly give the mortar greater resistance to water attack and therefore greater durability, since the deterioration reactions (ice, humidity, biological attack, etc.) are minimized. In the present study the polymer was used in powder form: a redispersible polymer, encapsulated so that on contact with water it is released from the capsule and re-forms the gel. In repair mortars the only hydraulic compound is cement, today the main constituent of construction materials. Cement is obtained by the joint grinding of clinker and gypsum; clinker is obtained by firing a mixture of clays and limestones at 1450-1500 °C, with the raw materials dosed so that the oxide ratios form the anhydrous clinker phases C3S, C2S, C3A and C4AF. Hydration of these phases yields the CSH gel that gives cement its properties. A standard (UNE-EN 197-1) establishes the composition, specifications and types of cement manufactured in Spain, and the current trend in cement manufacture is toward higher contents of additions (lime, pozzolana, fly ash, silica fume, etc.) in order to obtain more sustainable cements. Other components that influence the characteristics of mortars are the aggregates (natural calcareous or siliceous aggregates, acting as inert fillers that cohere the cementitious matrix) and the admixtures (components dosed below 5%, the most common being superplasticizers, whose water-reducing action improves durability). The technological improvement of mortars is aimed at increasing their durability in service, durability being the capacity to resist the action of the environment, chemical, physical or biological attack, or any process tending to destroy the material. These processes depend on factors such as the porosity of the concrete and the environmental exposure: the distribution of macropores, mesopores and micropores must be considered, since not all of them allow the transport of deteriorating agents, which causes internal stresses in the pore walls and destroys the cementitious matrix. Deterioration processes are generally related to the action of water, either as a direct agent or as a transport vehicle. The marine environment is particularly aggressive: chlorides and sulfates in seawater and in the atmosphere combine with CO2 and O2 to form Friedel's salt, weakening the cementitious matrix and corroding the reinforcement, whose expansion breaks the matrix through capillary stresses; alkali-aggregate reaction and chloride-ion diffusion can produce similar effects. The durability of a concrete also depends on the type of cement and its chemical composition (cements with high addition contents are more resistant), the water/cement ratio and the cement content. The standard UNE-EN 1504, in 10 parts, defines the products for the protection and repair of concrete structures, their quality control, and the physico-chemical and durability properties they must fulfil; it references a further 65 standards providing the test methods for evaluating repair systems.

In the second part of this thesis, a design of experiments was carried out with different polymer mortars (polymer concentrations between 0 and 25%), taking a control mortar without polymer as the reference, and their physico-chemical, mechanical and durability properties were studied. For low polymer proportions one-component systems are used; for high concentrations, two-component systems with the polymer in aqueous dispersion. The mechanical properties measured were compressive strength, flexural strength, modulus of elasticity, adhesion by direct traction, and expansion-shrinkage, all under UNE standards. Durability was characterized by capillary absorption, resistance to carbonation, and pull-off adhesion after freeze-thaw cycles. The aim of this part was to select the mortar with the best overall results for a later comparison between a mortar with an optimized polymer content and a mortar without polymer. The selection criteria were that the mortar must reach class R4 in mechanical performance and in the durability properties measured over the cycles, bearing in mind that the polymer addition cannot be high if the mortar is to remain competitive in price. The general conclusions of this study were:
- A standardized mortar does not meet the properties required for classification as R3 or R4.
- Without polymer, a mortar can be obtained that meets R4 for most of the measured characteristics.
- Water/cement ratios below 0.5 are necessary to obtain R4 mortars.
- The addition of polymer always improves adhesion, abrasion resistance, capillary absorption and carbonation resistance.
- All the polymer proportions used represent a technological improvement in mechanical properties and durability.
- The polymer has no influence on the expansion and shrinkage of the mortar.
- Adhesion improves notably with the use of polymer.
- The presence of polymer improves the properties related to the action of water, by increasing the cementing power and hence the cohesion; the greater cementing power reduces the porosity.
As the final outcome of this study, the optimum amount of polymer for the next part was determined to be 2.0-3.5%.

The third part is a comparative study of two mortars: one without polymer (mortar A) and one with the optimized polymer content from the previous part (mortar B). Once the polymer percentage was defined, an improved granular skeleton with a new grading of aggregate sizes was designed for both the reference mortar and the polymer mortar, and the tests for physical, microstructural and durability characterization were performed. In addition to the tests of the second part, the microstructural properties were studied by mercury porosimetry and scanning electron microscopy (SEM), and the fresh-state properties (consistency, entrained-air content and final setting time) were measured. Compared with the mortar without polymer, the polymer mortar showed the following advantages:
- Fresh state: mortar B showed higher consistency and a lower entrained-air content, making it more workable and more ductile as well as more resistant, since on hardening it leaves fewer voids in its internal structure, increasing its durability. Its longer, but not excessive, setting time also allows better workability on site.
- Mechanical properties: notably improved adhesion, one of the main properties the polymer confers on mortars. This higher adhesion improves bonding to the substrate, minimizes possible reactions at the concrete-mortar interface, and therefore increases the durability of the repair executed with the mortar and, in consequence, of the concrete.
- Microstructural properties: the polymer mortar has a lower porosity and a smaller critical pore size susceptible to attack by external deteriorating agents. The SEM data showed no major differences.
- Abrasion and capillary absorption: mortar B performed better, as a consequence of its lower porosity and its microscopic structure.
- Finally, resistance to attack by sulfates and seawater, as well as to the carbonation front, was better in the polymer mortar because of its lower permeability and lower porosity.
To complete the study, and given the great importance that factors such as sustainability are currently acquiring, a life-cycle analysis of the two mortars studied in the second experimental part was performed.

Relevance:

30.00%

Publisher:

Abstract:

Extensive in-situ testing has shown that blast fragmentation influences the performance of downstream processes in a mine and, as a consequence, the profit of the whole operation can be greatly improved through optimised fragmentation. Other unit operations like excavation, crushing and grinding can all be assisted by altering the blast-induced fragmentation. Experimental studies have indicated that a change in blasting practice would influence not only fragmentation but fragment strength as well. The strength of the fragments produced in a blast is clearly important to the performance of the crushing and grinding circuit, as it affects the energy required to break the feed to a target product size. In order to validate the effect of blasting on fragment strength, several lumps of granite were blasted under controlled conditions using three very different explosive products. The resulting fragments were subjected to standard comminution ore characterisation tests. The obtained comminution parameters were then used to simulate the performance of a SAG mill. Modelling results indicate that changes in post-blast residual rock fragment strength significantly influence the performance of the SAG mill, producing up to a 20% increase in throughput. (c) 2004 Elsevier Ltd. All rights reserved.
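
The abstract does not name the characterisation model, but standard drop-weight comminution tests of this kind are commonly parameterised with the JK impact-breakage relation

\[ t_{10} = A\left(1 - e^{-b\,E_{cs}}\right), \]

where E_{cs} is the specific comminution energy (kWh/t), t_{10} is the percentage passing one tenth of the original fragment size, and the product A \times b indexes resistance to impact breakage. Softer post-blast fragments correspond to a higher A \times b, which in mill simulation translates into the reported gain in SAG throughput.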

Relevance:

30.00%

Publisher:

Abstract:

Astrocytes modulate synaptic strength. This effect occurs, reports a new paper, because ATP-dependent vesicular release of astrocytic glutamate acts on presynaptic neuronal NMDA receptors to increase synaptic efficacy. © 2007 Nature Publishing Group.

Relevance:

30.00%

Publisher:

Abstract:

Global connectivity for anyone, at any place, at any time, providing high-speed, high-quality, and reliable communication channels for mobile devices, is now becoming a reality. The credit mainly goes to the recent technological advances in wireless communications, comprising a wide range of technologies, services, and applications designed to fulfill the particular needs of end-users in different deployment scenarios (Wi-Fi, WiMAX, and 3G/4G cellular systems). In such a heterogeneous wireless environment, one of the key ingredients for providing efficient ubiquitous computing with guaranteed quality and continuity of service is the design of intelligent handoff algorithms. Traditional single-metric handoff decision algorithms, such as those based on Received Signal Strength (RSS), are not efficient and intelligent enough to minimize the number of unnecessary handoffs, decision delays, and call-dropping and/or blocking probabilities. This research presented a novel approach for the design and implementation of a multi-criteria vertical handoff algorithm for heterogeneous wireless networks. Several parallel Fuzzy Logic Controllers were utilized in combination with different types of ranking algorithms and metric-weighting schemes to implement two major modules: the first module estimated the necessity of handoff, and the second selected the best network as the handoff target. Simulations based on different traffic classes and various types of wireless networks were carried out on a wireless test-bed inspired by the concept of the Rudimentary Network Emulator (RUNE). Simulation results indicated that the proposed scheme provided better performance in terms of minimizing unnecessary handoffs, call dropping, call blocking, and handoff blocking probabilities. When subjected to Conversational traffic and compared against the RSS-based reference algorithm, the proposed scheme, utilizing the FTOPSIS ranking algorithm, reduced the average outage probability of MSs moving at high speed by 17%, the new-call blocking probability by 22%, the handoff blocking probability by 16%, and the average handoff rate by 40%. The significant reduction in handoff rate gives the MS more efficient power consumption and longer battery life. These percentages indicate a higher probability of guaranteed session continuity and quality of the currently utilized service, resulting in higher user satisfaction.
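
As an illustration of the ranking module, the sketch below runs a plain (non-fuzzy) TOPSIS pass over a hypothetical set of candidate networks; the thesis' FTOPSIS variant additionally represents criteria as fuzzy numbers, and all weights and values here are made up:

```python
# TOPSIS ranking of candidate networks for vertical handoff.
# Rows: candidates (e.g., Wi-Fi, WiMAX, cellular). Columns: criteria
# (signal quality, bandwidth in Mbps, delay in ms, monetary cost). Illustrative data.
import numpy as np

D = np.array([
    [80.0, 54.0, 30.0, 1.0],
    [55.0, 70.0, 60.0, 3.0],
    [45.0, 10.0, 90.0, 5.0],
])
weights = np.array([0.35, 0.30, 0.20, 0.15])
benefit = np.array([True, True, False, False])   # higher-is-better flags

R = D / np.linalg.norm(D, axis=0)                # vector normalization per criterion
V = R * weights                                  # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)        # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)         # distance to the anti-ideal solution
closeness = d_neg / (d_pos + d_neg)              # higher = better handoff target
print("ranking (best first):", np.argsort(-closeness))
```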