924 results for Seeing the other


Relevance: 90.00%

Abstract:

Rest tremor, rigidity, and slowness of movements, considered to be mainly due to markedly reduced levels of dopamine (DA) in the basal ganglia, are the characteristic motor symptoms of Parkinson's disease (PD). Although there is as yet no cure for this illness, several drugs can alleviate the motor symptoms. Among these symptomatic therapies, L-dopa is the most effective: as a precursor of DA, it can replace the lost DA in the basal ganglia. In the long run, however, L-dopa has disadvantages. Motor response complications, such as shortening of the duration of the drug effect ("wearing-off"), develop in many patients. In addition, extensive peripheral metabolism of L-dopa by aromatic amino acid decarboxylase and catechol-O-methyltransferase (COMT) results in its short half-life, low bioavailability, and reduced efficacy. Entacapone, a nitrocatechol-structured compound, is a highly selective, reversible, and orally active inhibitor of COMT. It increases the bioavailability of L-dopa by reducing its peripheral elimination rate, and it extends the duration of the clinical response to each L-dopa dose in PD patients with wearing-off fluctuations. COMT is important in the metabolism of catecholamines, so its inhibition could in theory lead to adverse cardiovascular reactions, especially under enhanced sympathetic activity such as physical exercise. PD patients may be particularly vulnerable to such effects because of the high prevalence of cardiovascular autonomic dysfunction and the common use of the monoamine oxidase B inhibitor selegiline, another drug that affects catecholamine metabolism. Both entacapone and selegiline enhance the clinical effect of L-dopa, so their co-administration may lead to pharmacodynamic interactions, either beneficial (improved L-dopa efficacy) or harmful (increased dyskinesia).
We investigated the effects of repeated dosing (3-5 daily doses for 1-2 weeks) of entacapone 200 mg, administered either with or without selegiline (10 mg once daily), on several safety and efficacy parameters in 39 L-dopa-treated patients with mild to moderate PD in three double-blind, placebo-controlled, crossover studies. In the first two, the cardiovascular, clinical, and biochemical responses were assessed repeatedly for 6 hours after drug intake, first with L-dopa only (control) and then after 2 weeks on the study drugs (entacapone vs. entacapone plus selegiline in one; entacapone vs. selegiline vs. entacapone plus selegiline in the other). The third study included cardiovascular reflex and spiroergometric exercise testing, first after overnight L-dopa withdrawal (control) and then after 1 week on entacapone plus selegiline as adjuncts to L-dopa. Ambulatory ECG was recorded in two of the studies. Blood pressure, heart rate, ECG, cardiovascular autonomic function, cardiorespiratory exercise responses, and the resting/exercise levels of circulating catecholamines remained unaffected by entacapone, irrespective of selegiline. Entacapone significantly enhanced both L-dopa bioavailability and its clinical response, the latter being more pronounced with the co-administration of selegiline. Dyskinesias also increased during simultaneous use of entacapone and selegiline as L-dopa adjuncts. Entacapone had no effect on either work capacity or work efficiency. The drug was well tolerated, both with and without selegiline. Conclusions: The use of entacapone, either alone or combined with selegiline, seems to be hemodynamically safe in L-dopa-treated PD patients, including during maximal physical effort. This is in line with the safety experience from larger phase III studies. Entacapone had no effect on cardiovascular autonomic function.
Concomitant administration of entacapone and selegiline may enhance L-dopa's clinical efficacy but may also lead to increased dyskinesia.
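The pharmacokinetic argument above, that reducing the peripheral elimination rate lengthens the half-life and raises exposure, can be illustrated with a minimal one-compartment sketch. The rate constants below are purely illustrative assumptions, not values from the study:

```python
import math

def l_dopa_exposure(dose=100.0, ke=0.9):
    """One-compartment model with first-order elimination:
    C(t) = dose * exp(-ke * t), with ke in 1/h (arbitrary dose units).
    Returns the elimination half-life and the area under the curve."""
    half_life = math.log(2.0) / ke   # t_1/2 = ln 2 / ke
    auc = dose / ke                  # integral of C(t) from 0 to infinity
    return half_life, auc

# Illustrative comparison: COMT inhibition by entacapone reduces the
# peripheral elimination rate (both ke values here are assumptions).
t_half_base, auc_base = l_dopa_exposure(ke=0.9)   # L-dopa alone
t_half_ent,  auc_ent  = l_dopa_exposure(ke=0.6)   # with entacapone
```

A smaller elimination rate constant directly yields a longer half-life and a larger area under the concentration curve, i.e. greater bioavailability of each dose.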

Abstract:

Structural relaxation behavior of a rapidly quenched (RQ) and a slowly cooled Pd40Cu30Ni10P20 metallic glass was investigated and compared. Differential scanning calorimetry was employed to monitor the relaxation enthalpies at the glass transition temperature, Tg, and the Kohlrausch-Williams-Watts (KWW) stretched exponential function was used to describe their variation with annealing time. It was found that the rate of enthalpy recovery is higher in the ribbon, implying that the bulk is more resistant to relaxation at low annealing temperatures. This was attributed to the cooling rate affecting the locations where the glasses become trapped within the potential energy landscape. The RQ process traps a larger amount of free volume, resulting in higher fragility, so the ribbon relaxes at the slightest thermal excitation (annealing). The slowly cooled bulk metallic glass (BMG), on the other hand, traps less free volume and has more short-range ordering, and hence requires a larger amount of perturbation to access lower-energy basins.
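The enthalpy recovery described above can be sketched with the KWW stretched exponential. The time constants and stretching exponents below are illustrative assumptions, not the fitted values from the study:

```python
import numpy as np

def kww_recovery(t, tau, beta):
    """Fraction of the relaxation enthalpy recovered after annealing time t,
    using the Kohlrausch-Williams-Watts stretched exponential:
    phi(t) = exp[-(t/tau)^beta]; recovered fraction = 1 - phi(t)."""
    return 1.0 - np.exp(-(t / tau) ** beta)

t = np.logspace(0, 5, 50)                      # annealing times (s), illustrative
ribbon = kww_recovery(t, tau=1e3, beta=0.5)    # rapidly quenched ribbon (assumed)
bulk   = kww_recovery(t, tau=1e4, beta=0.7)    # slowly cooled BMG (assumed)
```

With a shorter relaxation time for the ribbon, the sketch reproduces the qualitative observation that the rapidly quenched glass recovers enthalpy faster at a given annealing temperature.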

Abstract:

The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces $\mathcal{S_I}$ and $\mathcal{S_C}$, and the problem reduces to determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating $\mathcal{S_I}$ and $\mathcal{S_C}$ is identified using Neyman-Pearson criteria. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings lie on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived from communication complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm, and the average-case CC of the relevant greater-than (GT) function is characterized to within two bits. In the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
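A minimal sketch of the second approach: one-bit threshold quantization at each node followed by a counting fusion rule at the fusion center. The Gaussian signal model and the k-out-of-n rule are assumptions for illustration, not the paper's derived optimum:

```python
import numpy as np

def sensor_bits(readings, threshold):
    """Two-level quantization: each node broadcasts 1 when its noisy
    reading exceeds the threshold (a threshold test)."""
    return (np.asarray(readings) > threshold).astype(int)

def fusion(bits, k):
    """Counting rule at the fusion center: declare 'intruder' when at
    least k of the n one-bit reports equal 1."""
    return int(bits.sum() >= k)

rng = np.random.default_rng(0)
n = 10
clutter  = rng.normal(0.0, 1.0, n)        # noise-only readings (assumed model)
intruder = 2.0 + rng.normal(0.0, 1.0, n)  # elevated mean near the intruder
clutter_decision  = fusion(sensor_bits(clutter, 1.0), k=4)
intruder_decision = fusion(sensor_bits(intruder, 1.0), k=4)
```

Each node sends exactly one bit per decision, which is the communication regime the abstract's second approach analyzes.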

Abstract:

The abrasion and slurry erosion behaviour of chromium-manganese iron samples, with chromium (Cr) in the range of about 16-19% and manganese (Mn) at 5 and 10% levels, was characterized by hardness measurements followed by microstructural examination using optical and scanning electron microscopy. Positron lifetime studies were conducted to understand the influence of defects/microporosity on the microstructure. The samples were heat treated and characterized to understand the structural transformations in the matrix. The data reveal that hardness decreased with the increase in Mn content from 5 to 10% in one case and with the increase in section size in the other, irrespective of the sample condition. The abrasion and slurry erosion losses increase with increasing section size as well as with increasing Mn content. The positron results show that as hardness increases from the as-cast to the heat-treated sample, the positron trapping rate, and hence the defect concentration, shows the opposite trend, as expected; a good correlation between defect concentration and hardness is thus observed. These findings also corroborate well with the microstructural features observed by optical and scanning electron microscopy.

Abstract:

In this thesis, the solar wind-magnetosphere-ionosphere coupling is studied observationally, with the main focus on the ionospheric currents in the auroral region. The thesis consists of five research articles and an introductory part that summarises the most important results reached in the articles and places them in a wider context within the field of space physics. Ionospheric measurements are provided by the International Monitor for Auroral Geomagnetic Effects (IMAGE) magnetometer network, by the low-orbit CHAllenging Minisatellite Payload (CHAMP) satellite, by the European Incoherent SCATter (EISCAT) radar, and by the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) satellite. Magnetospheric observations, on the other hand, are acquired from the four spacecraft of the Cluster mission, and solar wind observations from the Advanced Composition Explorer (ACE) and Wind spacecraft. Within the framework of this study, a new method for determining the ionospheric currents from low-orbit satellite-based magnetic field data is developed. In contrast to previous techniques, all three current density components can be determined on a matching spatial scale, and the validity of the necessary one-dimensionality approximation, and thus, the quality of the results, can be estimated directly from the data. The new method is applied to derive an empirical model for estimating the Hall-to-Pedersen conductance ratio from ground-based magnetic field data, and to investigate the statistical dependence of the large-scale ionospheric currents on solar wind and geomagnetic parameters. Equations describing the amount of field-aligned current in the auroral region, as well as the location of the auroral electrojets, as a function of these parameters are derived. Moreover, the mesoscale (10-1000 km) ionospheric equivalent currents related to two magnetotail plasma sheet phenomena, bursty bulk flows and flux ropes, are studied. 
Based on the analysis of 22 events, the typical equivalent current pattern related to bursty bulk flows is established. For the flux ropes, on the other hand, only two conjugate events are found. As the equivalent current patterns during these two events are not similar, it is suggested that the ionospheric signatures of a flux rope depend on the orientation and the length of the structure, but analysis of additional events is required to determine the possible ionospheric connection of flux ropes.
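The core idea behind determining ionospheric currents from low-orbit magnetic field data can be sketched under the one-dimensional current-sheet approximation the thesis mentions: Ampère's law reduces the current density to a spatial derivative of the transverse field. This is a simplified illustration, not the thesis' full three-component method, and the synthetic field values are assumptions:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def sheet_current_density(x, b_y):
    """Current density of a 1-D current sheet from the magnetic field
    component B_y (tesla) sampled along the satellite track x (metres),
    via Ampere's law in the 1-D approximation: j = (1/mu0) * dB_y/dx."""
    return np.gradient(b_y, x) / MU0

# Synthetic example: a uniform sheet current produces a linear B_y ramp,
# here 500 nT over 500 km (illustrative auroral-scale numbers).
x = np.linspace(0.0, 5.0e5, 101)
b_y = 1.0e-12 * (x - x.mean())
j = sheet_current_density(x, b_y)   # uniform, about 0.8 microamperes/m^2
```

The one-dimensionality assumption is exactly what the thesis says can be validated directly from the data before trusting such an estimate.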

Abstract:

Pack ice is an aggregate of ice floes drifting on the sea surface. The forces controlling the motion and deformation of pack ice are the air and water drag forces, sea surface tilt, the Coriolis force, and the internal force due to the interaction between ice floes. In this thesis, the mechanical behavior of compacted pack ice is investigated using theoretical and numerical methods, focusing on three basic material properties: compressive strength, yield curve, and flow rule. A high-resolution three-category sea ice model is applied to investigate the sea ice dynamics in two small basins, the whole Gulf of Riga and Pärnu Bay inside it, focusing on the calibration of the compressive strength for thin ice. These two basins are on scales of 100 km and 20 km, respectively, with typical ice thicknesses of 10-30 cm. The model is found capable of capturing the main characteristics of the ice dynamics. The compressive strength is calibrated to about 30 kPa, consistent with the values from most large-scale sea ice dynamics studies. In addition, the numerical study in Pärnu Bay suggests that the shear strength drops significantly when the ice-floe size markedly decreases. A characteristic inversion method is developed to probe the yield curve of compacted pack ice. The basis of this method is the relationship between the intersection angle of linear kinematic features (LKFs) in sea ice and the slope of the yield curve. A summary of the observed LKFs shows that they can basically be divided into three groups: intersecting leads, uniaxial opening leads, and uniaxial pressure ridges. Based on the available observed angles, the yield curve is determined to be a curved diamond. Comparisons of this yield curve with those from other methods show that it possesses almost all the advantages identified by the other methods. A new constitutive law is proposed, in which the yield curve is a diamond and the flow rule is a combination of the normal and co-axial flow rules.
The non-normal co-axial flow rule is necessary for the Coulombic yield constraint. This constitutive law not only captures the main features of LKF formation but also has the advantage of avoiding the overestimation of divergence during shear deformation. Moreover, this study provides a method for observing the flow rule of pack ice during deformation.
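The characteristic inversion idea can be illustrated with the textbook Mohr-Coulomb special case, in which conjugate slip lines intersect at an angle fixed by the internal friction; the thesis' relation between LKF angles and the local slope of a general yield curve reduces to this in the Coulomb limit. The friction coefficient below is an assumed value for illustration:

```python
import math

def lkf_intersection_angle(mu):
    """Acute intersection angle (degrees) between conjugate slip lines
    in a Mohr-Coulomb material with internal friction coefficient mu:
    each line makes 45 - phi/2 degrees with the compression axis, so the
    pair intersects at 90 - phi degrees, where phi = atan(mu)."""
    phi = math.degrees(math.atan(mu))
    return 90.0 - phi

angle = lkf_intersection_angle(0.7)  # about 55 degrees for mu = 0.7
```

Inverting this relation, an observed intersection angle constrains the friction, and hence the yield-curve slope, at the stress state producing the LKFs.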

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, a more complex treatment of physical processes, and more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the model's tendency to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models give much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes.
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is caused by an inertial oscillation mechanism when the large-scale flow is from south-easterly or westerly directions. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is crucial, especially if the inner meso-scale model domain is small.
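As a concrete example of the kind of simple empirical clear-sky scheme referred to above, a Brunt-type effective-emissivity formula for the downwelling longwave flux can be sketched as follows. The coefficients are commonly quoted textbook values, given here as assumptions rather than those evaluated in the thesis:

```python
def brunt_longwave_down(t_air, e_hpa, a=0.52, b=0.065):
    """Clear-sky downwelling longwave flux (W/m^2) from a Brunt-type
    empirical scheme: L = (a + b*sqrt(e)) * sigma * T^4, with screen-level
    air temperature T (K) and water vapour pressure e (hPa). The default
    coefficients a, b are commonly quoted values (assumptions here)."""
    sigma = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)
    emissivity = a + b * (e_hpa ** 0.5)
    return emissivity * sigma * t_air ** 4

flux = brunt_longwave_down(t_air=283.15, e_hpa=10.0)
```

Schemes of this form need only screen-level temperature and humidity, which is why they remain competitive despite their simplicity.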

Abstract:

Experiments were conducted with two smooth hills lying well within the boundary layer over a flat plate mounted in a wind tunnel. One hill was shallow, with a peak height of 1.5 mm and a width of 50 mm; the other was steep, 3 mm high and 30 mm wide. Since the hills occupied one half of the tunnel span, streamwise vorticity formed near the hills' edges. At a freestream speed of 3.5 m/s, streaks formed with inflectional wall-normal and spanwise velocity profiles, but without causing transition. Transition, observed at 7.5 m/s, took different routes with the two hills. With the steep hill, streamwise velocity signals exhibited the passage of a wave packet that intensified before breakdown to turbulence. With the shallow hill there was a broad range of frequencies present immediately downstream of the hill; these fluctuations grew continuously, and transition occurred within a shorter distance. Since the size of the streamwise vorticity generated at the hill edge is of the order of the hill height, the shallow hill generates vorticity closer to the wall and supports an earlier transition, whereas the steep hill creates a thicker vortex and associated streaks, which exhibit oscillations due to their own instability as an additional precursor stage before transition.

Abstract:

In the nursery pollination system of figs (Ficus, Moraceae), flower-bearing receptacles called syconia breed pollinating wasps and are units of both pollination and seed dispersal. Pollinators and mammalian seed dispersers are attracted to syconia by volatile organic compounds (VOCs). In monoecious figs, syconia produce both wasps and seeds, while in (gyno)dioecious figs, male (gall) fig trees produce wasps and female (seed) fig trees produce seeds. VOCs were collected using dynamic headspace adsorption methods on freshly collected figs from different trees using Super Q® collection traps, and VOC profiles were determined by gas chromatography-mass spectrometry (GC-MS). The VOC profiles of receptive-phase and dispersal-phase figs were clearly different only in the dioecious mammal-dispersed Ficus hispida, but not in the dioecious bird-dispersed F. exasperata or the monoecious bird-dispersed F. tsjahela. The VOC profile of dispersal-phase female figs was clearly different from that of male figs only in F. hispida, but not in F. exasperata, as predicted from the phenology of syconium production, which overlaps between male and female trees only in F. hispida. The greater difference in VOC profiles in F. hispida might ensure preferential removal of seed figs by dispersal agents when gall figs are simultaneously available. Only the VOC profile of the mammal-dispersed female figs of F. hispida had high levels of fatty acid derivatives such as amyl acetates and 2-heptanone, while monoterpenes, sesquiterpenes, and shikimic acid derivatives were predominant in the other syconial types. The bird- and mammal-repellent compound methyl anthranilate occurred only in gall figs of both dioecious species, as expected, since gall figs containing wasp pollinators should not be consumed by dispersal agents.

Abstract:

Currently, we live in an era characterized by the completion and first runs of the LHC accelerator at CERN, which is hoped to provide the first experimental hints of what lies beyond the Standard Model of particle physics. In addition, the last decade has witnessed a new dawn of cosmology, in which it has truly emerged as a precision science. Largely due to the WMAP measurements of the cosmic microwave background, we now believe we have quantitative control of much of the history of our universe. These two experimental windows offer us not only an unprecedented view of the smallest and largest structures of the universe, but also a glimpse of the very first moments in its history. At the same time, they require theorists to focus on the fundamental challenges awaiting at the boundary of high energy particle physics and cosmology. What were the contents and properties of matter in the early universe? How is one to describe its interactions? What implications do the various models of physics beyond the Standard Model have for the subsequent evolution of the universe? In this thesis, we explore the connection between supersymmetric theories in particular and the evolution of the early universe. First, we provide the reader with a general introduction to modern-day particle cosmology from two angles: on the one hand by reviewing our current knowledge of the history of the early universe, and on the other by introducing the basics of supersymmetry and its derivatives. Subsequently, with the help of the developed tools, we direct attention to the specific questions addressed in the three original articles that form the main scientific content of the thesis. Each of these papers concerns a distinct cosmological problem, ranging from the generation of the matter-antimatter asymmetry to inflation, and finally to the origin or very early stage of the universe.
They nevertheless share a common factor in their use of the machinery of supersymmetric theories to address open questions in the corresponding cosmological models.

Abstract:

Aerosols impact the planet and our daily lives through various effects, perhaps most notably their climatic and health-related consequences. While there are several primary particle sources, secondary new particle formation from precursor vapors is also known to be a frequent, global phenomenon. Nevertheless, the formation mechanism of new particles, as well as the vapors participating in the process, remains a mystery. This thesis consists of studies on new particle formation, specifically from the point of view of numerical modeling. A dependence of the formation rate of 3 nm particles on the sulphuric acid concentration raised to a power of 1-2 has been observed. This suggests that the nucleation mechanism is of first or second order with respect to the sulphuric acid concentration, in other words a mechanism based on the activation or kinetic collision of clusters. However, model studies have had difficulty replicating the small exponents observed in nature. The work done in this thesis indicates that the exponents may be lowered by the participation of a co-condensing (and potentially nucleating) low-volatility organic vapor, or by increasing the assumed size of the critical clusters. On the other hand, the new and more accurate method presented here for determining the exponent indicates high diurnal variability. Additionally, these studies included several semi-empirical nucleation rate parameterizations as well as a detailed investigation of the analysis used to determine the apparent particle formation rate. Because they cover a large fraction of the Earth's surface area, oceans could prove to be climatically significant sources of secondary particles. In the absence of marine observation data, new particle formation events in a coastal region were parameterized and studied. Since the formation mechanism is believed to be similar, the new parameterization was applied in a marine scenario.
The work showed that marine CCN production is feasible in the presence of additional vapors contributing to particle growth. Finally, a new method to estimate the concentrations of condensing organics was developed. The algorithm utilizes a Markov chain Monte Carlo method to determine the required combination of vapor concentrations by comparing a measured particle size distribution with one from an aerosol dynamics process model. The evaluation indicated excellent agreement with model data, and initial results with field data appear sound as well.
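The exponent analysis discussed above amounts to a slope fit of log formation rate against log sulphuric acid concentration: activation-type nucleation predicts a slope near 1, kinetic collision near 2. A minimal sketch with synthetic data (all numbers below are assumed for illustration):

```python
import numpy as np

def fit_nucleation_exponent(h2so4, j_obs):
    """Least-squares slope of log10(J) vs. log10([H2SO4]), i.e. the
    apparent exponent n in J = k * [H2SO4]^n."""
    n, _log_k = np.polyfit(np.log10(h2so4), np.log10(j_obs), 1)
    return n

# Synthetic observations generated with n = 1.5 and modest
# multiplicative noise (both choices are illustrative assumptions).
rng = np.random.default_rng(42)
h2so4 = np.logspace(6, 8, 30)                 # molecules cm^-3
j_obs = 1e-10 * h2so4 ** 1.5 * rng.lognormal(0.0, 0.1, 30)
n_hat = fit_nucleation_exponent(h2so4, j_obs)
```

An apparent exponent between 1 and 2, as here, is exactly the ambiguous regime the thesis attributes to co-condensing organic vapors or larger critical clusters.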

Abstract:

Sequence design problems are considered in this paper. The problem of sum power minimization in a spread spectrum system can be reduced to the problem of sum capacity maximization, and vice versa; a solution to one of the problems yields a solution to the other. Subsequently, conceptually simple sequence design algorithms known to hold for the white-noise case are extended to the colored-noise case. The algorithms yield an upper bound of 2N - L on the number of sequences, where N is the processing gain and L is the number of non-interfering subsets of users. If some users (at most N - 1) are allowed to signal along a limited number of multiple dimensions, then N orthogonal sequences suffice.
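The sum capacity being maximized has a standard closed form for a symbol-synchronous system with white noise; the sketch below evaluates it for a given signature matrix (the signatures, powers, and noise level are assumed example values):

```python
import numpy as np

def sum_capacity(S, powers, noise_var=1.0):
    """Sum capacity (bits per channel use) of a synchronous CDMA system
    with N x K signature matrix S (unit-norm columns) and user powers p:
    C = 0.5 * log2 det(I_N + S diag(p) S^T / sigma^2)."""
    N = S.shape[0]
    G = S @ np.diag(powers) @ S.T / noise_var
    _sign, logdet = np.linalg.slogdet(np.eye(N) + G)
    return 0.5 * logdet / np.log(2.0)

# Orthogonal signatures (K = N) decouple the users, so the sum capacity
# is N parallel single-user channels: N * 0.5 * log2(1 + p/sigma^2).
N = 4
S = np.eye(N)
c = sum_capacity(S, powers=[3.0] * N)   # 4 * 0.5 * log2(4) = 4.0 bits
```

For K > N users the same expression quantifies the loss from non-orthogonal signatures, which is what the designed sequence sets aim to minimize.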

Abstract:

Two subunits of eukaryotic RNA polymerase II, Rpb7 and Rpb4, form a subcomplex that has counterparts in RNA polymerases I and III. Although a medium-resolution structure has been solved for the 12-subunit RNA polymerase II, the relative contributions of the contact regions between the subcomplex and the core polymerase, and the consequences of disrupting them, have not been studied in detail. We have identified mutations in the N-terminal ribonucleoprotein-like domain of Saccharomyces cerevisiae Rpb7 that affect its role in certain stress responses, such as growth at high temperature and sporulation. These mutations increase the dependence of Rpb7 on Rpb4 for interaction with the rest of the polymerase. Complementation analysis and RNA polymerase pulldown assays reveal that the Rpb4·Rpb7 subcomplex associates with the rest of the core RNA polymerase II through two crucial interaction points: one at the N-terminal ribonucleoprotein-like domain of Rpb7 and the other at the partially ordered N-terminal region of Rpb4. These findings are in agreement with the crystal structure of the 12-subunit polymerase. We show here that the weak interaction predicted for the N-terminal region of Rpb4 with Rpb2 in the crystal structure actually plays a significant role in the interaction of the subcomplex with the core in vivo. Our mutant analysis also suggests that Rpb7 plays an essential role in the cell through its ability to interact with the rest of the polymerase.

Abstract:

New metal-organic frameworks (MOFs) [Ni(C12N2H10)(H2O)][C6H3(COO)2(COOH)] (I), [Co2(H2O)6][C6H3(COO)3]2·(C4N2H12)(H2O)2 (II), [Ni2(H2O)6][C6H3(COO)3]2·(C4N2H12)(H2O)2 (III), [Ni(C13N2H14)(H2O)][C6H3(COO)2(COOH)] (IV), [Ni3(H2O)8][C6H3(COO)3] (V) and [Co(C4N2H4)(H2O)][C6H3(COO)3] (VI) {C6H3(COOH)3 = trimesic acid, C12N2H10 = 1,10-phenanthroline, C4N2H12 = piperazine dication, C13N2H14 = 1,3-bis(4-pyridyl)propane and C4N2H4 = pyrazine} have been synthesized using an interface between two immiscible solvents, water and cyclohexanol. The compounds are constructed from the connectivity between the octahedral M2+ (M = Ni, Co) ions coordinated by oxygen atoms of carboxylate groups and water molecules and/or by nitrogen atoms of the ligand amines and the carboxylate units, forming a variety of structures of different dimensionality. Strong hydrogen bonds of the type O-H···O are present in all the compounds, giving rise to supramolecularly organized higher-dimensional structures. In some cases π···π interactions are also observed. Magnetic studies indicate weak ferromagnetic interactions in I, IV and V and weak antiferromagnetic interactions in the other compounds (II, III and VI). All the compounds have been characterized by a variety of techniques.

Abstract:

In this paper, we study the behaviour of the slotted Aloha multiple access scheme with a finite number of users under different traffic loads, and we optimize the retransmission probability q_r for various settings, cost objectives, and policies. First, we formulate the problem as a parameter optimization problem and use certain efficient smoothed functional algorithms to find the optimal retransmission probability parameter. Next, we propose two classes of multi-level closed-loop feedback policies (for finding, in each case, the retransmission probability q_r, which now depends on the current system state) and apply the above algorithms to find an optimal policy within each class. While one of the policy classes depends on the number of backlogged nodes in the system, the other depends on the number of time slots since the last successful transmission. The latter policies are more realistic, as it is difficult to keep track of the number of backlogged nodes at each instant. We investigate the effect of increasing the number of levels in the feedback policies. We also investigate the effects of using different cost functions (with and without penalization) in our algorithms and the corresponding changes in throughput and delay. Both of our algorithms use two-timescale stochastic approximation. One of the algorithms uses one simulation, while the other uses two simulations of the system. The two-simulation algorithm is seen to perform better than the other algorithm. Optimal multi-level closed-loop policies are seen to perform better than optimal open-loop policies, and the performance further improves when more levels are used in the feedback policies.
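The open-loop part of the problem has a classical closed form: with n backlogged nodes each retransmitting with probability q, a slot succeeds when exactly one node transmits, and the success probability n*q*(1-q)^(n-1) is maximized at q = 1/n. A small sketch under this standard collision model (the grid search stands in for the paper's smoothed functional algorithms):

```python
def success_probability(n, q):
    """Probability of a successful slot when n backlogged nodes each
    retransmit independently with probability q: exactly one transmits."""
    return n * q * (1.0 - q) ** (n - 1)

def best_q(n, grid=10000):
    """Grid search for the open-loop retransmission probability that
    maximizes the per-slot success probability (optimum is q = 1/n)."""
    qs = [i / grid for i in range(1, grid)]
    return max(qs, key=lambda q: success_probability(n, q))

q_star = best_q(10)   # close to 1/10
```

The closed-loop policies studied in the paper improve on this by letting q_r track the (estimated) backlog instead of fixing it at one open-loop value.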