943 results for Time-Fractional Diffusion-Wave Problem
Abstract:
In this paper we propose a metaheuristic to solve a new version of the Maximum Capture Problem (MCP). In the original MCP, market capture is determined by lower traveling distances or lower traveling times; in this new version, not only the traveling time but also the waiting time affects the market share. This problem is hard to solve using standard optimization techniques. Metaheuristics are shown to offer accurate results within acceptable computing times.
Abstract:
We implemented Biot-type porous wave equations in a pseudo-spectral numerical modeling algorithm for the simulation of Stoneley waves in porous media. Fourier and Chebyshev methods are used to compute the spatial derivatives along the horizontal and vertical directions, respectively. To prevent overly short time steps due to the small grid spacing at the top and bottom of the model caused by the Chebyshev operator, the mesh is stretched in the vertical direction. A major benefit of the Chebyshev operator is that it allows an explicit treatment of interfaces. Boundary conditions can be implemented with a characteristics approach, with the characteristic variables evaluated at zero viscosity. We use this approach to model seismic wave propagation at the interface between a fluid and a porous medium. Each medium is represented by a different mesh, and the two meshes are connected through the characteristics-based domain-decomposition method described above. We show an experiment with sealed-pore boundary conditions, where we first compare the numerical solution to an analytical solution. We then show the influence of heterogeneity and viscosity of the pore fluid on the propagation of the Stoneley wave and surface waves in general.
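A minimal sketch of why the vertical mesh is stretched (illustrative only, not the authors' code): Chebyshev collocation points cluster near the domain ends with spacing of order 1/N², which forces very small explicit time steps; a one-parameter stretching map (here the arcsine map, an assumed choice the abstract does not specify) enlarges the minimum spacing.

```python
import numpy as np

N = 64
# Chebyshev-Gauss-Lobatto points on [-1, 1]; spacing near the ends ~ pi^2/(2 N^2)
x = np.cos(np.pi * np.arange(N + 1) / N)
dx_min = np.min(np.abs(np.diff(x)))

# Hypothetical stretching map s(x) = arcsin(a*x)/arcsin(a), which
# decompresses the grid near the boundaries for a close to 1
a = 0.99
xs = np.arcsin(a * x) / np.arcsin(a)
dxs_min = np.min(np.abs(np.diff(xs)))

print(dx_min, dxs_min)  # the stretched grid has a larger minimum spacing
```

Since the stable explicit time step scales with the minimum grid spacing, the stretched mesh permits proportionally larger time steps.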
Abstract:
Motivation is the key to learning. The present study is about the relationship between intrinsic and extrinsic motivation as they affect learning with regard to students who are learning EFL for the first time. Cape Verdean seventh grade students learning English for the first time are generally very enthusiastic about the language before they start learning it in high school. However, that enthusiasm seems not to be maintained throughout the school year, and oftentimes teachers hear them complain about the difficulties of mastering aspects of the language. It seems that for some reason their motivation is undermined. Why does that happen? Is it the students' fault or the teacher's? If it is the teacher's fault, which motivation strategies work best to cope with this problem: intrinsic or extrinsic? With this in mind I asked the question: What is the relationship between students' needs, interests, goals and expectations in learning English as a foreign language and teachers' roles as facilitators and motivators? Many studies have been carried out in the field of motivation, and up to now there seems to be no consensus on which is best. For the purposes of this paper, three main theories that have prevailed in the field of motivational psychology will be discussed: the behavioural, the cognitive and the humanistic theories. Within these theories, sub-theories are discussed and their relationship to intrinsic and extrinsic motivation is explained with regard to Cape Verdean students learning English for the first time.
Abstract:
We obtain minimax lower bounds on the regret for the classical two-armed bandit problem. We provide a finite-sample minimax version of the well-known $\log n$ asymptotic lower bound of Lai and Robbins. Also, in contrast to the $\log n$ asymptotic results on the regret, we show that the minimax regret is achieved by mere random guessing under fairly mild conditions on the set of allowable configurations of the two arms. That is, we show that for {\sl every} allocation rule and for {\sl every} $n$, there is a configuration such that the regret at time $n$ is at least $1-\epsilon$ times the regret of random guessing, where $\epsilon$ is any small positive constant.
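To make the "regret of random guessing" benchmark concrete, here is a small illustration (not from the paper): for a two-armed bandit whose arms have mean reward gap Delta, the uniform random allocation rule pulls the inferior arm n/2 times in expectation, so its expected regret is n·Delta/2.

```python
import random

def regret_random_guessing(n, delta, trials=2000, seed=0):
    # Monte Carlo estimate of the expected regret of the uniform
    # allocation rule: each of the n pulls chooses the inferior arm
    # with probability 1/2, and each such pull costs delta in regret.
    rng = random.Random(seed)
    total_suboptimal = 0
    for _ in range(trials):
        total_suboptimal += sum(rng.random() < 0.5 for _ in range(n))
    return (total_suboptimal / trials) * delta

n, delta = 100, 0.2
est = regret_random_guessing(n, delta)
print(est)  # close to n * delta / 2 = 10.0
```

The minimax result above says no allocation rule can beat essentially this level of regret uniformly over all arm configurations.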
Abstract:
Previous covering models for emergency services consider all calls to be of the same importance and impose the same waiting-time constraints independently of the call's priority. This type of constraint is clearly inappropriate in many contexts. For example, in urban medical emergency services, calls that involve danger to human life deserve higher priority than calls for more routine incidents. A realistic model in such a context should allow the calls for service to be prioritized. In this paper a covering model which considers different priority levels is formulated and solved. The model inherits its formulation from previous research on Maximum Coverage Models and incorporates results from Queuing Theory, in particular Priority Queuing. The additional complexity incorporated in the model justifies the use of a heuristic procedure.
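A sketch of the kind of Priority Queuing result such a model can draw on (a standard textbook formula, Cobham's formula for a non-preemptive M/M/1 queue with priority classes, not the authors' implementation): it quantifies how high-priority calls see shorter expected waits.

```python
def priority_wait_times(lam, mu):
    """Mean waiting times per class (class 0 = highest priority) in a
    non-preemptive M/M/1 priority queue, via Cobham's formula."""
    # Mean residual service: sum_k lam_k * E[S^2] / 2, with E[S^2] = 2/mu^2
    # for exponential service times.
    w0 = sum(l / mu**2 for l in lam)
    rho = [l / mu for l in lam]
    waits = []
    sigma_prev = 0.0
    for k in range(len(lam)):
        sigma_k = sigma_prev + rho[k]
        waits.append(w0 / ((1 - sigma_prev) * (1 - sigma_k)))
        sigma_prev = sigma_k
    return waits

# Two call classes: urgent (rate 0.2) and routine (rate 0.4), service rate 1
w = priority_wait_times(lam=[0.2, 0.4], mu=1.0)
print(w)  # the high-priority class waits less than the low-priority class
```

Embedding such per-class waiting times in a coverage constraint is what makes the prioritized model harder to solve than the classical one.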
Abstract:
The paper presents a new model based on the basic Maximum Capture model, MAXCAP. The new Chance Constrained Maximum Capture model introduces a stochastic threshold constraint, which recognises the fact that a facility can be open only if a minimum level of demand is captured. A metaheuristic based on the MAX-MIN Ant System and a tabu search procedure is presented to solve the model. This is the first time that the MAX-MIN Ant System has been adapted to solve a location problem. Computational experience and an application to a 55-node network are also presented.
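A minimal sketch of the generic MAX-MIN Ant System pheromone update (standard MMAS mechanics, not the authors' location-problem code; the component keys are hypothetical): only the best solution deposits pheromone, and trail values are clamped to [tau_min, tau_max] to avoid premature convergence.

```python
def mmas_update(tau, best_components, best_cost, rho=0.1,
                tau_min=0.01, tau_max=1.0):
    for key in tau:
        tau[key] *= (1.0 - rho)        # evaporation on every trail
    for key in best_components:        # deposit only along the best solution
        tau[key] += 1.0 / best_cost
    for key in tau:                    # the MMAS clamping step
        tau[key] = min(tau_max, max(tau_min, tau[key]))
    return tau

# Hypothetical trails over three candidate facility sites
tau = {("facility", j): 1.0 for j in range(3)}
tau = mmas_update(tau, best_components=[("facility", 0)], best_cost=5.0)
print(tau)
```

In a full solver, ants would build facility subsets probabilistically from these trails, with tabu search refining the best solution between pheromone updates.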
Abstract:
In alkaline lavas, the chemical zoning of spinel megacrystals is due to cationic exchange between the crystals and the host lava. Applying Fick's law to the cationic diffusion profiles makes it possible to calculate how long these crystals resided in the lava. Crystals in chemical equilibrium were in contact with the lava for 20 to 30 days, whereas megacrystals lacking this equilibrium were in contact for only 3 or 4 days. The duration of the rise of an ultrabasic nodule through the volcanic chimney was calculated by applying Stokes' law.
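The two calculations can be sketched as back-of-the-envelope formulas (the numerical values below are illustrative assumptions, not the paper's data): a diffusion time scale from Fick's law, t ~ L²/D, and a nodule ascent velocity from Stokes' law, v = 2(ρ_s − ρ_l)g r² / (9η).

```python
def diffusion_time(L, D):
    # Characteristic time (s) for a cation diffusion profile of width L (m)
    # to develop with diffusivity D (m^2/s): t ~ L^2 / D
    return L**2 / D

def stokes_velocity(r, rho_s, rho_l, eta, g=9.81):
    # Stokes terminal velocity (m/s) of a sphere of radius r (m) and
    # density rho_s (kg/m^3) rising or sinking in a melt of density
    # rho_l and viscosity eta (Pa.s)
    return 2.0 * (rho_s - rho_l) * g * r**2 / (9.0 * eta)

t = diffusion_time(L=1e-4, D=1e-14)   # ~11.6 days for a 100 um profile
v = stokes_velocity(r=0.05, rho_s=3300.0, rho_l=2700.0, eta=100.0)
print(t / 86400.0, v)
```

Dividing the chimney length by such a Stokes velocity gives the ascent duration the abstract refers to.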
Abstract:
PURPOSE: This study investigated the isolated and combined effects of heat [temperate (22 °C/30 % rH) vs. hot (35 °C/40 % rH)] and hypoxia [sea level (FiO2 0.21) vs. moderate altitude (FiO2 0.15)] on exercise capacity and neuromuscular fatigue characteristics. METHODS: Eleven physically active subjects cycled to exhaustion at constant workload (66 % of the power output associated with their maximal oxygen uptake in temperate conditions) in four different environmental conditions [temperate/sea level (control), hot/sea level (hot), temperate/moderate altitude (hypoxia) and hot/moderate altitude (hot + hypoxia)]. Torque and electromyography (EMG) responses following electrical stimulation of the tibial nerve (plantar-flexion; soleus) were recorded before and 5 min after exercise. RESULTS: Time to exhaustion was reduced (P < 0.05) in hot (-35 ± 15 %) or hypoxia (-36 ± 14 %) compared to control (61 ± 28 min), while hot + hypoxia (-51 ± 20 %) further compromised exercise capacity (P < 0.05). However, the effect of temperature or altitude on end-exercise core temperature (P = 0.089 and P = 0.070, respectively) and rating of perceived exertion (P > 0.05) did not reach significance. Maximal voluntary contraction torque, voluntary activation (twitch interpolation) and peak twitch torque decreased from pre- to post-exercise (-9 ± 1, -4 ± 1 and -6 ± 1 % all trials compounded, respectively; P < 0.05), with no effect of the temperature or altitude. M-wave amplitude and root mean square activity were reduced (P < 0.05) in hot compared to temperate conditions, while normalized maximal EMG activity did not change. Altitude had no effect on any measured parameters. CONCLUSION: Moderate hypoxia in combination with heat stress reduces cycling time to exhaustion without modifying neuromuscular fatigue characteristics. Impaired oxygen delivery or increased cardiovascular strain, increasing relative exercise intensity, may have also contributed to earlier exercise cessation.
Abstract:
BACKGROUND: Despite major advances in the care of premature infants, around 40% of survivors exhibit mild cognitive deficits. Besides severe intraventricular haemorrhages (IVH) and cystic periventricular leucomalacia (PVL), more subtle patterns such as grade I and II IVH, punctate WM lesions and diffuse PVL might be linked to these cognitive deficits. Grey matter disease is also recognized to contribute to long-term cognitive impairment.
OBJECTIVE: We intend to use novel MR techniques to study the different injury patterns more precisely. In particular, MP2RAGE (magnetization-prepared 2 rapid acquisition gradient echoes) produces high-resolution quantitative T1 relaxation maps. This contrast is known to reflect tissue anomalies such as white matter injury in general and dysmyelination in particular. We also used diffusion tensor imaging (DTI), a quantitative technique known to reflect white matter maturation and disease.
DESIGN/METHODS: All preterm infants born under 30 weeks of GA were included. Serial 3T MR imaging with a neonatal head coil at DOL 3, 10 and at term-equivalent age (TEA), using DTI and MP2RAGE sequences, was performed. MP2RAGE generates a T1 map and allows the relaxation time T1 to be calculated. Multiple measurements were performed for each exam in 12 defined white and grey matter ROIs.
RESULTS: 16 patients were recruited: mean GA 27 2/7 weeks (191.2 d, SD ±10.8), mean BW 999 g (SD ±265). 39 MRIs were performed (12 early: mean 4.83 d ±1.75; 13 late: mean 18.77 d ±8.05; and 14 at TEA: 88.91 d ±8.96). Measurements of the relaxation time T1 show a gradual and significant decrease over time (for the PLIC ROI, mean ±SD in ms: 2100.53 ±102.75, 2116.50 ±41.55 and 1726.42 ±51.31; for the central WM ROI: 2302.25 ±79.02, 2315.02 ±115.02 and 1992.70 ±96.37 for the early, late and TEA MRs, respectively). These trends are also observed in grey matter areas, especially the thalamus.
Measurements of ADC values show a similar monotonic decrease over time.
CONCLUSIONS: From these preliminary results, we conclude that quantitative MR imaging in very preterm infants is feasible. On the successive MP2RAGE and DTI sequences, we observe a gradual decrease over time in the described ROIs, representing the progressive maturation of the WM microstructure; interestingly, the same evolution is observed in the grey matter. We speculate that our study will provide normative values for the T1 map and ADC, which might be predictive factors for a favourable or less favourable outcome.
Abstract:
This paper proposes a new time-domain test of a process being I(d), $0 < d \le 1$, under the null, against the alternative of being I(0) with deterministic components subject to structural breaks at known or unknown dates, with the goal of disentangling the existing identification issue between long memory and structural breaks. Denoting by AB(t) the different types of structural breaks in the deterministic components of a time series considered by Perron (1989), the test statistic proposed here is based on the t-ratio (or the infimum of a sequence of t-ratios) of the estimated coefficient on $y_{t-1}$ in an OLS regression of $\Delta^d y_t$ on a simple transformation of the above-mentioned deterministic components and $y_{t-1}$, possibly augmented by a suitable number of lags of $\Delta^d y_t$ to account for serial correlation in the error terms. The case where $d = 1$ coincides with the Perron (1989) or the Zivot and Andrews (1992) approaches if the break date is known or unknown, respectively. The statistic is labelled the SB-FDF (Structural Break-Fractional Dickey-Fuller) test, since it is based on the same principles as the well-known Dickey-Fuller unit root test. Both its asymptotic behavior and finite sample properties are analyzed, and two empirical applications are provided.
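The test's main ingredient can be sketched as follows (an illustration under simplifying assumptions, with only a constant as deterministic component and no break terms or lag augmentation; not the authors' SB-FDF code): fractionally difference $y_t$ with $\Delta^d$ via its binomial expansion, then compute the t-ratio on $y_{t-1}$ in an OLS regression of $\Delta^d y_t$ on a constant and $y_{t-1}$.

```python
import numpy as np

def frac_diff(y, d):
    # Binomial expansion of (1 - L)^d: pi_0 = 1, pi_k = pi_{k-1}*(k-1-d)/k,
    # so Delta^d y_t = sum_k pi_k * y_{t-k}
    n = len(y)
    pi = np.ones(n)
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return np.array([pi[:t + 1][::-1] @ y[:t + 1] for t in range(n)])

def fdf_t_ratio(y, d):
    # OLS of Delta^d y_t on [1, y_{t-1}]; return the t-ratio on y_{t-1}
    dy = frac_diff(y, d)[1:]
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(300))   # a random walk, i.e. I(1)
t_stat = fdf_t_ratio(y, d=1.0)            # d = 1: the Dickey-Fuller case
print(t_stat)
```

For d = 1 the pi weights reduce to (1, -1, 0, ...), so the regression collapses to the usual Dickey-Fuller setup; the full test would add the break regressors AB(t) and, at unknown break dates, take the infimum of the t-ratios over candidate dates.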
Abstract:
There is increasing evidence to suggest that the presence of mesoscopic heterogeneities constitutes the predominant attenuation mechanism at seismic frequencies. As a consequence, centimeter-scale perturbations of the subsurface physical properties should be taken into account for seismic modeling whenever detailed and accurate responses of the target structures are desired. This is, however, computationally prohibitive since extremely small grid spacings would be necessary. A convenient way to circumvent this problem is to use an upscaling procedure to replace the heterogeneous porous media by equivalent visco-elastic solids. In this work, we solve Biot's equations of motion to perform numerical simulations of seismic wave propagation through porous media containing mesoscopic heterogeneities. We then use an upscaling procedure to replace the heterogeneous poro-elastic regions by homogeneous equivalent visco-elastic solids and repeat the simulations using visco-elastic equations of motion. We find that, despite the equivalent attenuation behavior of the heterogeneous poro-elastic medium and the equivalent visco-elastic solid, the seismograms may differ due to diverging boundary conditions at fluid-solid interfaces, where there exist additional options for the poro-elastic case. In particular, we observe that the seismograms agree for closed-pore boundary conditions, but differ significantly for open-pore boundary conditions. This is an interesting result, which has potentially important implications for wave-equation-based algorithms in exploration geophysics involving fluid-solid interfaces, such as, for example, wave field decomposition.
Abstract:
RÉSUMÉ Raman spectroscopy is a chemical analysis technique based on the phenomenon of light scattering. This phenomenon was first observed in 1928 by Raman and Krishnan, and these observations earned Raman the Nobel Prize in Physics in 1930. Raman spectroscopy was applied to the analysis of dyes in acrylic, cotton and wool textile fibers of blue, red and black color. We were thus able to confirm that the technique is suitable for the in situ analysis of traces of microscopic size. Moreover, it can be described as fast and non-destructive, and it requires no particular sample preparation. However, fluorescence proved to be the most important drawback. During fiber analysis, different analytical conditions were tested, and they turned out to depend mainly on the laser chosen. The technique's potential for detecting and identifying the dyes impregnated in fibers was confirmed in this study. A spectral database of sixty reference dyes was built in order to identify the main dye impregnated in the collected fibers. In addition, the analysis of different blocks of color, consisting of samples of unknown origin requested from various people, made it possible to divide these samples into several groups and to evaluate the rarity of the resulting Raman spectral configurations. The ability of the Raman technique to differentiate these samples was evaluated and compared with that of the conventional methods for textile fiber analysis, namely UV-Vis microspectrophotometry (MSP) and thin-layer chromatography (TLC). The Raman technique proved to be less discriminating than MSP for all the blocks of color considered.
For this reason, within an analytical sequence we recommend using Raman after the color analysis method, with as large a number of laser sources as possible. Finally, instruments equipped with several excitation wavelengths not only reduce fluorescence but also allow a greater number of samples to be exploited. ABSTRACT Raman spectroscopy allows for the measurement of the inelastic scattering of light due to the vibrational modes of a molecule when irradiated by an intense monochromatic source such as a laser. This phenomenon was observed for the first time by Raman and Krishnan in 1928. For this observation, Raman was awarded the Nobel Prize in Physics in 1930. Raman spectroscopy was applied to the dye analysis of textile fibers. Blue, black and red acrylics, cottons and wools were examined. The Raman technique presents advantages such as its non-destructive nature, fast analysis time, and the possibility of performing microscopic in situ analyses. However, the problem of fluorescence was often encountered. Several aspects were investigated to establish the best analytical conditions for every type/color fiber combination. The potential of the technique for the detection and identification of dyes was confirmed. A spectral database of 60 reference dyes was built to detect the main dyes used for the coloration of fiber samples. Particular attention was paid to the discriminating power of the technique. Based on the results of the Raman analysis for the different blocks of color submitted to analysis, it was possible to obtain different classes of fibers according to the general shape of the spectra.
The ability of Raman spectroscopy to differentiate samples was compared with that of the conventional techniques used for the analysis of textile fibers, namely UV-Vis microspectrophotometry (UV-Vis MSP) and thin-layer chromatography (TLC). The Raman technique proved to be less discriminating than MSP for every block of color considered in this study. It is therefore recommended that, in an analytical sequence, Raman spectroscopy be used after MSP and light microscopy. It was shown that using several laser wavelengths allowed for the reduction of fluorescence and for the exploitation of a higher number of samples.
Abstract:
Traditional culture-dependent methods to quantify and identify airborne microorganisms are limited by factors such as short-duration sampling times and the inability to count nonculturable or non-viable bacteria. Consequently, the quantitative assessment of bioaerosols is often underestimated. Using the real-time quantitative polymerase chain reaction (Q-PCR) to quantify bacteria in environmental samples presents an alternative method that should overcome this problem. The aim of this study was to evaluate the performance of a real-time Q-PCR assay as a simple and reliable way to quantify the airborne bacterial load within poultry houses and sewage treatment plants, in comparison with epifluorescence microscopy and culture-dependent methods. The estimates of bacterial load that we obtained from real-time PCR and epifluorescence methods are comparable; however, our analysis of sewage treatment plants indicates that these methods give values 270-290 fold greater than those obtained by the ''impaction on nutrient agar'' method. The culture-dependent method of air impaction on nutrient agar was also inadequate in poultry houses, as was the impinger-culture method, which gave a bacterial load estimate 32-fold lower than that obtained by Q-PCR. Real-time quantitative PCR thus proves to be a reliable, discerning, and simple method that could be used to estimate the airborne bacterial load in a broad variety of other environments expected to carry high numbers of airborne bacteria. [Authors]
Abstract:
This PhD thesis addresses the issue of alleviating the burden of developing ad hoc applications. Such applications have the particularity of running on mobile devices, communicating in a peer-to-peer manner and implementing some proximity-based semantics. A typical example of such an application is a radar application where users see their avatar as well as the avatars of their friends on a map on their mobile phone. Such applications have become increasingly popular with the advent of the latest generation of mobile smart phones, with their impressive computational power, their peer-to-peer communication capabilities and their location detection technology. Unfortunately, the existing programming support for such applications is limited, hence the need to address this issue in order to alleviate their development burden. This thesis specifically tackles this problem by providing several tools for application development support. First, it provides the location-based publish/subscribe service (LPSS), a communication abstraction which elegantly captures recurrent communication issues and thus allows the code complexity to be dramatically reduced. LPSS is implemented in a modular manner in order to target two different network architectures. One pragmatic implementation is aimed at mainstream infrastructure-based mobile networks, where mobile devices can communicate through fixed antennas. The other, fully decentralized, implementation targets emerging mobile ad hoc networks (MANETs), where no fixed infrastructure is available and communication can only occur in a peer-to-peer fashion. For each of these architectures, various implementation strategies tailored for different application scenarios, parametrizable at deployment time, are provided. Second, this thesis provides two location-based message diffusion protocols, namely 6Shot broadcast and 6Shot multicast, specifically aimed at MANETs and fine-tuned to be used as building blocks for LPSS.
Finally, this thesis proposes Phomo, a phone motion testing tool that makes it possible to test the proximity semantics of ad hoc applications without having to move around with mobile devices. These development support tools have been packaged in a coherent middleware framework called Pervaho.
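The proximity-based matching at the heart of a location-based publish/subscribe service can be sketched as follows (a hypothetical minimal API for illustration; the thesis' actual LPSS interfaces are not reproduced here): a subscriber receives a publication only when the publisher lies within the subscription's range.

```python
import math

class LocationPubSub:
    def __init__(self):
        self.subs = []  # list of (callback, (x, y), radius)

    def subscribe(self, callback, position, radius):
        # Register interest in publications issued within `radius` of `position`
        self.subs.append((callback, position, radius))

    def publish(self, message, position):
        # Proximity-based filtering: deliver only to subscriptions in range
        for callback, pos, radius in self.subs:
            if math.dist(position, pos) <= radius:
                callback(message)

received = []
ps = LocationPubSub()
ps.subscribe(received.append, position=(0.0, 0.0), radius=10.0)
ps.publish("friend nearby", position=(3.0, 4.0))  # distance 5 -> delivered
ps.publish("too far", position=(20.0, 0.0))       # distance 20 -> dropped
print(received)  # ['friend nearby']
```

In the decentralized MANET setting, this matching cannot be done at a central broker; that is where location-based diffusion protocols such as the 6Shot family come in as building blocks.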