990 results for Super threshold random variable
Abstract:
AIM: This study evaluates the effect of front-suspension (FS) and dual-suspension (DS) mountain bikes on performance and vibrations during off-road uphill riding. METHODS: Thirteen male cyclists (27+/-5 years, 70+/-6 kg, VO(2max) 59+/-6 mL.kg(-1).min(-1), mean+/-SD) rode, in random sequence and at their lactate threshold, an off-road uphill course (1.69 km, 212 m elevation gain) on both types of bicycle. Variables measured: a) oxygen uptake VO(2) (K4b2 analyzer, Cosmed), b) power output (SRM), c) gain in altitude and d) 3-D accelerations under the saddle and at the wheel (Physilog, EPFL, Switzerland). Power spectral analysis (Fourier) was performed on the vertical acceleration data. RESULTS: For the FS and DS mountain bikes respectively: speed amounted to 7.5+/-0.7 km.h(-1) and 7.4+/-0.8 km.h(-1) (NS), energy expenditure to 1.39+/-0.16 kW and 1.38+/-0.18 kW (NS), gross efficiency to 0.161+/-0.013 and 0.159+/-0.013 (NS), peak frequency of vibration under the saddle to 4.78+/-2.85 Hz and 2.27+/-0.2 Hz (P<0.01), and median frequency of vertical displacements of the saddle to 9.41+/-1.47 Hz and 5.78+/-2.27 Hz (P<0.01). CONCLUSION: Vibrations at saddle level on the DS bike are of low frequency, whereas those on the FS bike are mostly of high frequency. On the DS bike, the torque produced by the cyclist at the pedals may generate low-frequency vibrations. We conclude that the DS bike absorbs more high-frequency vibration, is more comfortable, and performs as well as the FS bicycle.
Abstract:
Variation in queen number alters the genetic structure of social insect colonies, which in turn affects patterns of kin-selected conflict and cooperation. Theory suggests that shifts from single- to multiple-queen colonies are often associated with other changes in the breeding system, such as higher queen turnover, more local mating, and restricted dispersal. These changes may restrict gene flow between the two types of colonies and it has been suggested that this might ultimately lead to sympatric speciation. We performed a detailed microsatellite analysis of a large population of the ant Formica selysi, which revealed extensive variation in social structure, with 71 colonies headed by a single queen and 41 by multiple queens. This polymorphism in social structure appeared stable over time, since little change in the number of queens per colony was detected over a five-year period. Apart from queen number, single- and multiple-queen colonies had very similar breeding systems. Queen turnover was absent or very low in both types of colonies. Single- and multiple-queen colonies exhibited very small but significant levels of inbreeding, which indicates a slight deviation from random mating at a local scale and suggests that a small proportion of queens mate with related males. For both types of colonies, there was very little genetic structuring above the level of the nest, with no sign of isolation by distance. These similarities in the breeding systems were associated with a complete lack of genetic differentiation between single- and multiple-queen colonies, which provides no support for the hypothesis that change in queen number leads to restricted gene flow between social forms. 
Overall, this study suggests that the higher rates of queen turnover, local mating, and population structuring that are often associated with multiple-queen colonies do not appear when single- and multiple-queen colonies still coexist within the same population, but build up over time in populations consisting mostly of multiple-queen colonies.
Abstract:
We develop an analytical approach to the susceptible-infected-susceptible epidemic model that allows us to unravel the true origin of the absence of an epidemic threshold in heterogeneous networks. We find that a delicate balance between the number of high degree nodes in the network and the topological distance between them dictates the existence or absence of such a threshold. In particular, small-world random networks with a degree distribution decaying slower than an exponential have a vanishing epidemic threshold in the thermodynamic limit.
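The vanishing-threshold behavior described above can be illustrated with the classical heterogeneous mean-field estimate of the SIS threshold, lambda_c = <k>/<k^2>, a standard result that the paper's analysis refines rather than the paper's own formula. A minimal sketch under that assumption:

```python
import numpy as np

def hmf_epidemic_threshold(degrees):
    """Heterogeneous mean-field estimate of the SIS epidemic threshold:
    lambda_c = <k> / <k^2>.  As the degree distribution becomes
    heavier-tailed, <k^2> grows and the threshold shrinks toward zero."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

# Degree samples from a light-tailed (Poisson) and a heavy-tailed
# (truncated Pareto) distribution, for comparison.
rng = np.random.default_rng(0)
poisson_k = rng.poisson(6, size=100_000) + 1
powerlaw_k = np.round(rng.pareto(1.5, size=100_000) + 1).astype(int)

light_tail_threshold = hmf_epidemic_threshold(poisson_k)
heavy_tail_threshold = hmf_epidemic_threshold(powerlaw_k)
```

With the heavy-tailed degree sequence the estimated threshold is far smaller, mirroring the vanishing threshold in the thermodynamic limit.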
Abstract:
The objective of this study was to determine the inter- and intra-examiner reliability of pain pressure threshold algometry at various points of the abdominal wall of healthy women. Twenty-one healthy women in menacme with a mean age of 28 ± 5.4 years (range: 19-39 years) were included. All volunteers had regular menstrual cycles (27-33 days) and were right-handed, and, to the best of our knowledge, none were taking medications at the time of testing. Women with a diagnosis of depression, anxiety or other mood disturbances were excluded, as were women with previous abdominal surgery, any pain condition, any evidence of inflammation, hypertension, smoking, alcoholism, or inflammatory disease. Pain perception thresholds were assessed with a pressure algometer with digital traction and compression and a measuring capacity of 5 kg. All points were localized by palpation and marked with a felt-tipped pen, and each individual was evaluated over a period of 2 days in two consecutive sessions, each session consisting of a set of 14 point measurements repeated twice by two examiners in random sequence. There was no statistically significant difference in the mean pain threshold obtained by the two examiners on the 2 different days (examiner A: P = 1.00; examiner B: P = 0.75; Wilcoxon matched pairs test). There was excellent/good agreement between examiners for all days and all points. Our results establish baseline values to which future researchers will be able to refer, and show that pressure algometry is a reliable measure of pain perception in the abdominal wall of healthy women.
Abstract:
This master's thesis continues the study of classical and quantum superintegrability in two-dimensional Euclidean space with a third-order integral of motion. It consists of one article. Since the classifications of all Hamiltonians separable in Cartesian and polar coordinates have already been completed, we add to this picture the study of systems separable in parabolic coordinates. First, we derive the determining equations of a system in parabolic coordinates, and then we solve the resulting equations to find the third-order integrals for a potential that allows separation in parabolic coordinates. Finally, we show that all third-order integrals for potentials separable in parabolic coordinates in two-dimensional Euclidean space are reducible. In the conclusion of the article we analyze the differences between potentials separable in Cartesian and polar coordinates on the one hand and in parabolic coordinates on the other. Keywords: integrability, superintegrability, classical mechanics, quantum mechanics, Hamiltonian, separation of variables, commutation.
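The parabolic-coordinate separation referred to above follows the standard textbook definitions, which are assumed here rather than quoted from the thesis:

```latex
% Parabolic coordinates (\xi, \eta) on the Euclidean plane:
\begin{align}
  x &= \tfrac{1}{2}\left(\xi^{2} - \eta^{2}\right), \qquad y = \xi\eta, \\
  H &= \frac{p_{\xi}^{2} + p_{\eta}^{2}}{2\left(\xi^{2} + \eta^{2}\right)}
     + \frac{W_{1}(\xi) + W_{2}(\eta)}{\xi^{2} + \eta^{2}}.
\end{align}
```

A Hamiltonian of this Liouville form, with the potential splitting into a function of \xi plus a function of \eta over the common factor \xi^2 + \eta^2, is exactly one that separates in parabolic coordinates.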
Abstract:
Stimuli-responsive polymers have been widely studied in recent years, particularly with biomedical applications in view. They can change their solubility in response to variations in pH or temperature. This thesis concerns the synthesis and study of new diblocks composed of two random copolymers. The polymers were obtained by RAFT (reversible addition-fragmentation chain-transfer) controlled radical polymerization. The block polymers are formed from methacrylate and/or acrylamide monomers whose polymers are known to be thermoresponsive and pH-sensitive. First, random block copolymers of the type AnBm-b-ApBq were synthesized from N-n-propylacrylamide (nPA) and N-ethylacrylamide (EA), A and B respectively, by RAFT polymerization. The copolymerization kinetics of poly(nPAx-co-EA1-x)-block-poly(nPAy-co-EA1-y) and their composition were studied in order to characterize and evaluate the physico-chemical properties of random block copolymers with a low polydispersity index. Their thermoresponsive character was studied in aqueous solution by UV-Vis spectroscopy, turbidimetry and dynamic light scattering (DLS) analysis. The cloud points (CP) observed for the individual blocks and the resulting copolymers show well-defined phase transitions on heating. Many natural macromolecules respond to external stimuli such as pH and temperature. Accordingly, a third monomer, 2-diethylaminoethyl methacrylate (DEAEMA), was added to the synthesis to form block copolymers of the form AnBm-b-ApCq, offering a dual response (pH and temperature) tunable in solution. This multi-stimuli polymer, of the form poly(nPAx-co-DEAEMA1-x)-block-poly(nPAy-co-EA1-y), was also synthesized by RAFT polymerization.
The results indicate random block copolymers with physico-chemical properties different from those of the first diblocks, notably their solubility under variations of pH and temperature. Finally, the change in hydrophobicity of the copolymers was studied by varying the lengths of the block sequences. The relative block length is known to affect the aggregation mechanisms of an amphiphilic copolymer. Thus, under different pH and/or temperature stimuli, experiments on random block copolymers of different lengths show interesting aggregation behaviors, evolving through various micellar, aggregate and vesicle forms.
Abstract:
We analyze a finite horizon, single product, periodic review model in which pricing and production/inventory decisions are made simultaneously. Demands in different periods are random variables that are independent of each other, and their distributions depend on the product price. Pricing and ordering decisions are made at the beginning of each period and all shortages are backlogged. Ordering cost includes both a fixed cost and a variable cost proportional to the amount ordered. The objective is to find an inventory policy and a pricing strategy maximizing expected profit over the finite horizon. We show that when the demand model is additive, the profit-to-go functions are k-concave and hence an (s,S,p) policy is optimal. In such a policy, the period inventory is managed based on the classical (s,S) policy and price is determined based on the inventory position at the beginning of each period. For more general demand functions, i.e., multiplicative plus additive functions, we demonstrate that the profit-to-go function is not necessarily k-concave and an (s,S,p) policy is not necessarily optimal. We introduce a new concept, symmetric k-concave functions, and apply it to provide a characterization of the optimal policy.
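A minimal sketch of how the (s,S,p) decision rule described above operates in a single period; the price rule and the numbers are invented for illustration and are not from the paper:

```python
def s_S_p_decision(inventory, s, S, price_of):
    """One period of an (s,S,p) policy: if the inventory position is at
    or below the reorder point s, order up to the order-up-to level S;
    the posted price depends on the starting inventory position.
    `price_of` is a hypothetical price rule, not the paper's."""
    order = S - inventory if inventory <= s else 0
    price = price_of(inventory)
    return order, price

# Illustrative price rule: charge more when stock is scarce.
price_of = lambda x: 10.0 if x <= 2 else 8.0

low_stock = s_S_p_decision(1, s=2, S=10, price_of=price_of)   # orders up to S
high_stock = s_S_p_decision(5, s=2, S=10, price_of=price_of)  # no order placed
```

The point of the structure is that both decisions are functions of the inventory position alone, which is what the k-concavity argument establishes for the additive demand model.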
Abstract:
We analyze an infinite horizon, single product, periodic review model in which pricing and production/inventory decisions are made simultaneously. Demands in different periods are identically distributed random variables that are independent of each other and their distributions depend on the product price. Pricing and ordering decisions are made at the beginning of each period and all shortages are backlogged. Ordering cost includes both a fixed cost and a variable cost proportional to the amount ordered. The objective is to maximize expected discounted, or expected average profit over the infinite planning horizon. We show that a stationary (s,S,p) policy is optimal for both the discounted and average profit models with general demand functions. In such a policy, the period inventory is managed based on the classical (s,S) policy and price is determined based on the inventory position at the beginning of each period.
Abstract:
The human electroencephalogram (EEG) is globally characterized by a 1/f power spectrum superimposed with certain peaks, of which the "alpha peak" in the 8-14 Hz frequency range is the most prominent during relaxed wakefulness. We present simulations of a minimal dynamical network model of leaky integrator neurons attached to the nodes of an evolving directed and weighted random graph (an Erdős–Rényi graph). We derive a model of the dendritic field potential (DFP) for the neurons, leading to a simulated EEG that describes the global activity of the network. Depending on the network size, we find an oscillatory transition of the simulated EEG when the network reaches a critical connectivity. This transition, indicated by a suitably defined order parameter, is reflected by a sudden change in the network's topology when super-cycles are formed from merging isolated loops. After the oscillatory transition, the power spectra of simulated EEG time series exhibit a 1/f continuum superimposed with certain peaks.
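A crude sketch of a leaky-integrator network on a directed random graph, with the summed activity standing in for the simulated EEG; the update rule, weights and parameters are invented for illustration and are not the paper's DFP model:

```python
import numpy as np

def simulate_leaky_network(n, p, leak=0.9, steps=200, seed=0):
    """Leaky integrator units x_i coupled through a directed
    Erdos-Renyi-style random weight matrix:
        x(t+1) = leak * x(t) + W @ tanh(x(t)) + noise.
    The mean activity is recorded as a stand-in 'EEG' signal."""
    rng = np.random.default_rng(seed)
    # Random directed adjacency with connection probability p; weights
    # scaled so the network activity stays bounded (tanh saturates).
    W = (rng.random((n, n)) < p) * rng.normal(
        0.0, 1.0 / np.sqrt(n * p + 1), (n, n))
    x = rng.normal(0.0, 0.1, n)
    eeg = []
    for _ in range(steps):
        x = leak * x + W @ np.tanh(x) + rng.normal(0.0, 0.01, n)
        eeg.append(x.mean())
    return np.array(eeg)

eeg = simulate_leaky_network(n=100, p=0.05)
```

Sweeping `p` upward in such a sketch is the analogue of the growing connectivity at which the paper reports the oscillatory transition.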
Synapsing variable length crossover: An algorithm for crossing and comparing variable length genomes
Abstract:
The Synapsing Variable Length Crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable-length genomes. In addition to providing a rationale for variable-length crossover, it also provides a genotypic similarity metric for variable-length genomes, enabling standard niche formation techniques to be used with variable-length genomes. Unlike other variable-length crossover techniques, which treat genomes as rigid, inflexible arrays and select some or all crossover points at random, the SVLC algorithm treats genomes as flexible and chooses non-random crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues" or synapses homogenous genetic sub-sequences together, in such a way that common parental sequences are automatically preserved in the offspring and only the genetic differences are exchanged or removed, independent of the length of such differences. In a variable-length test problem the SVLC algorithm is shown to outperform current variable-length crossover techniques. The SVLC algorithm is also shown to work in a more realistic application: the evolution of a robot neural-network controller.
Abstract:
The synapsing variable-length crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable-length genomes. In addition to providing a rationale for variable-length crossover, it also provides a genotypic similarity metric for variable-length genomes, enabling standard niche formation techniques to be used with variable-length genomes. Unlike other variable-length crossover techniques, which treat genomes as rigid, inflexible arrays and select some or all crossover points at random, the SVLC algorithm treats genomes as flexible and chooses non-random crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues" or synapses homogenous genetic subsequences together, in such a way that common parental sequences are automatically preserved in the offspring and only the genetic differences are exchanged or removed, independent of the length of such differences. In a variable-length test problem, the SVLC algorithm compares favorably with current variable-length crossover techniques. The variable-length approach is further advocated by demonstrating how a variable-length genetic algorithm (GA) can obtain a high-fitness solution in fewer iterations than a traditional fixed-length GA in a two-dimensional vector approximation task.
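The idea of preserving common parental subsequences while exchanging only the differences can be sketched with an alignment-based crossover built on Python's `difflib`; this is a simplification in the spirit of SVLC, not the published algorithm:

```python
from difflib import SequenceMatcher

def aligned_crossover(parent_a, parent_b):
    """Alignment-based variable-length crossover: subsequences shared by
    both parents are preserved in the child; in each non-matching gap
    the child inherits the segment from one parent (alternating sides
    here, for determinism; a GA would typically choose at random)."""
    blocks = SequenceMatcher(a=parent_a, b=parent_b).get_matching_blocks()
    child, ia, ib, take_a = [], 0, 0, True
    for m in blocks:  # final block is difflib's zero-length sentinel
        gap = parent_a[ia:m.a] if take_a else parent_b[ib:m.b]
        child.extend(gap)                          # differing segment
        child.extend(parent_a[m.a:m.a + m.size])   # common block kept
        ia, ib, take_a = m.a + m.size, m.b + m.size, not take_a
    return child

p1 = list("ACGTACGT")
p2 = list("ACGGGTAACGT")
child = aligned_crossover(p1, p2)
```

Because the child's length is driven by the alignment rather than by fixed array positions, parents of different lengths recombine without truncation or padding.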
Abstract:
The redistribution of a finite amount of martian surface dust during global dust storms and in the intervening periods has been modelled in a dust lifting version of the UK Mars General Circulation Model. When using a constant, uniform threshold in the model’s wind stress lifting parameterisation and assuming an unlimited supply of surface dust, multiannual simulations displayed some variability in dust lifting activity from year to year, arising from internal variability manifested in surface wind stress, but dust storms were limited in size and formed within a relatively short seasonal window. Lifting thresholds were then allowed to vary at each model gridpoint, dependent on the rates of emission or deposition of dust. This enhanced interannual variability in dust storm magnitude and timing, such that model storms covered most of the observed ranges in size and initiation date within a single multiannual simulation. Peak storm magnitude in a given year was primarily determined by the availability of surface dust at a number of key sites in the southern hemisphere. The observed global dust storm (GDS) frequency of roughly one in every 3 years was approximately reproduced, but the model failed to generate these GDSs spontaneously in the southern hemisphere, where they have typically been observed to initiate. After several years of simulation, the surface threshold field—a proxy for net change in surface dust density—showed good qualitative agreement with the observed pattern of martian surface dust cover. The model produced a net northward cross-equatorial dust mass flux, which necessitated the addition of an artificial threshold decrease rate in order to allow the continued generation of dust storms over the course of a multiannual simulation. 
At standard model resolution, the southward mass flux due to cross-equatorial flushing storms would need to increase by a factor of 3–4 to offset the northward flux due to GDSs on a timescale of ∼3 years. Results at higher model resolution and uncertainties in dust vertical profiles nevertheless mean that quasi-periodic redistribution of dust on such a timescale appears to be a plausible explanation for the observed GDS frequency.
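The variable-threshold lifting scheme described above can be sketched as a single gridpoint update: dust lifts where wind stress exceeds the local threshold, emission raises the threshold (surface dust depleted) and deposition lowers it. The constants and functional forms are invented for illustration and are not the UK MGCM parameterisation:

```python
def update_dust_threshold(threshold, wind_stress, deposition,
                          lift_rate=0.1, sensitivity=0.05):
    """One step of an illustrative variable-threshold dust-lifting
    scheme.  Emission is proportional to the wind-stress excess over
    the local threshold; the threshold then rises with net emission
    and falls with net deposition, so it acts as a proxy for the
    (inverse) surface dust availability."""
    excess = max(wind_stress - threshold, 0.0)
    emission = lift_rate * excess
    new_threshold = threshold + sensitivity * (emission - deposition)
    return emission, new_threshold

# Strong wind: lifting occurs and the threshold creeps upward.
emission, thr = update_dust_threshold(threshold=0.02, wind_stress=0.05,
                                      deposition=0.001)
# Weak wind: no lifting; deposition alone lowers the threshold.
emission2, thr2 = update_dust_threshold(threshold=0.02, wind_stress=0.01,
                                        deposition=0.001)
```

The feedback is what allows storm magnitude to vary from year to year: sites that lifted heavily one year start the next with a higher threshold and less available dust.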
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, when water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
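The step of intersecting a SAR-derived flood extent with a DTM to obtain water level observations can be sketched on a toy grid; this is an illustration of the idea, not a production SAR processing chain:

```python
import numpy as np

def boundary_water_levels(dtm, flood_mask):
    """Estimate water-level observations (WLOs) by sampling DTM heights
    at the flood boundary: flooded cells with at least one dry
    4-neighbour.  Cells outside the grid are treated as dry."""
    flooded = flood_mask.astype(bool)
    dry = ~flooded
    padded = np.pad(dry, 1, constant_values=True)
    neighbour_dry = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                     padded[1:-1, :-2] | padded[1:-1, 2:])
    boundary = flooded & neighbour_dry
    return dtm[boundary]

# Toy valley: low ground in the middle, banks at 3 m.
dtm = np.array([[3.0, 2.0, 2.0, 3.0],
                [3.0, 1.0, 1.0, 3.0],
                [3.0, 1.0, 1.0, 3.0]])
mask = dtm < 2.5          # hypothetical SAR-derived flood extent
wlos = boundary_water_levels(dtm, mask)
```

In the assimilation work described above, heights sampled this way along the flood edge supply the observations used to update the flood model state.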
Abstract:
Habitat use and the processes that determine fish distribution were evaluated at the reef flat and reef crest zones of a tropical, algal-dominated reef. Our comparisons indicated significant differences in the majority of the evaluated environmental characteristics between zones. Significant differences in the abundances of twelve of the thirteen analyzed species were also observed within and between sites. According to null models, non-random patterns of species co-occurrence were significant, suggesting that fish guilds in both zones were non-randomly structured. Unexpectedly, structural complexity negatively affected overall species richness, but had a major positive influence on highly site-attached species such as a damselfish. Depth and substrate composition, particularly macroalgae cover, were positive determinants of fish assemblage structure in the studied reef, prevailing over factors such as structural complexity and live coral cover. Our results conflict with other studies carried out in coral-dominated reefs of the Caribbean and Pacific, supporting the idea that the factors which may potentially influence reef fish composition are highly site-dependent and variable.
Abstract:
A new technique to analyze fusion data is developed. From experimental cross sections and results of coupled-channel calculations a dimensionless function is constructed. In collisions of strongly bound nuclei this quantity is very close to a universal function of a variable related to the collision energy, whereas for weakly bound projectiles the effects of breakup coupling are measured by the deviations with respect to this universal function. This technique is applied to collisions of stable and unstable weakly bound isotopes.
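Reductions of this kind are commonly built on the universal fusion function of the Wong model, F0(x) = ln[1 + exp(2*pi*x)] with x = (E - V_B)/(hbar*omega); that standard form is assumed here rather than taken from the abstract:

```python
import numpy as np

def universal_fusion_function(x):
    """Wong-model universal fusion function F0(x) = ln(1 + exp(2*pi*x)),
    where x = (E - V_B) / (hbar*omega) is the dimensionless collision
    energy.  In the reduction procedure, experimental cross sections are
    mapped onto a dimensionless F(x); for strongly bound nuclei F(x)
    tracks F0(x), and deviations from it measure effects such as
    breakup coupling."""
    return np.log1p(np.exp(2.0 * np.pi * np.asarray(x, dtype=float)))

x = np.linspace(-2.0, 2.0, 5)
F0 = universal_fusion_function(x)
```

Well above the barrier F0(x) approaches the straight line 2*pi*x, while well below it the function falls off exponentially, which is what makes a single universal curve a useful benchmark across systems.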