940 results for Search data
Abstract:
Processing efficiency theory predicts that anxiety reduces the processing capacity of working memory and has detrimental effects on performance. When tasks place little demand on working memory, the negative effects of anxiety can be avoided by increasing effort: although performance efficiency decreases, there is no change in performance effectiveness. When tasks impose a heavy demand on working memory, however, anxiety leads to decrements in both efficiency and effectiveness. These predictions were tested using a modified table tennis task that placed low (LWM) and high (HWM) demands on working memory. Cognitive anxiety was manipulated through a competitive ranking structure and prize money. Participants' accuracy in hitting concentric circle targets in predetermined sequences was taken as a measure of performance effectiveness, while probe reaction time (PRT), perceived mental effort (RSME), visual search data, and arm kinematics were recorded as measures of efficiency. Anxiety had a negative effect on performance effectiveness in both the LWM and HWM tasks. Gaze frequency, PRT, and RSME values increased in both tasks under high- versus low-anxiety conditions, implying decrements in performance efficiency. However, participants spent more time tracking the ball in the HWM task and employed a shorter tau margin when anxious. Although anxiety impaired both performance effectiveness and efficiency, the decrements in efficiency were more pronounced in the HWM task than in the LWM task, providing support for processing efficiency theory.
Abstract:
This research focuses on automatically adapting the size of a search engine in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud makes it straightforward to allocate computing resources to the engine or deallocate them from it. Our contribution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP), and the Regrouping Order Problem (ROP). CNP is the problem of determining, in light of changes in the query workload, the ideal number of processors p to keep active in the search engine at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the performance of the solution using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental algorithm speeds up index computation 2-10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than the alternatives and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. Combined, these algorithms yield an adaptive algorithm that adjusts the search engine size under a variable workload.
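To make the CNP sub-problem concrete, here is a minimal sketch (Python) of a load-driven resizing loop. The window size, utilisation thresholds, per-processor capacity, and doubling/halving rule are illustrative assumptions; the abstract does not specify the thesis's deterministic algorithm.

    # Hypothetical CNP-style resizer: periodically re-evaluate recent load and
    # pick a new processor count. All constants below are assumptions.
    from collections import deque

    class SearchEngineResizer:
        def __init__(self, min_procs=1, max_procs=64, window=60,
                     upper_util=0.75, lower_util=0.30, capacity_per_proc=100.0):
            self.procs = min_procs
            self.min_procs, self.max_procs = min_procs, max_procs
            self.samples = deque(maxlen=window)         # recent queries/s observations
            self.upper_util = upper_util                # scale out above this utilisation
            self.lower_util = lower_util                # scale in below this utilisation
            self.capacity_per_proc = capacity_per_proc  # assumed queries/s per processor

        def observe(self, queries_per_second):
            self.samples.append(queries_per_second)

        def reevaluate(self):
            """Return the processor count chosen for the current load."""
            if not self.samples:
                return self.procs
            load = sum(self.samples) / len(self.samples)
            utilisation = load / (self.procs * self.capacity_per_proc)
            if utilisation > self.upper_util:
                self.procs = min(self.procs * 2, self.max_procs)   # scale out
            elif utilisation < self.lower_util:
                self.procs = max(self.procs // 2, self.min_procs)  # scale in
            return self.procs

After a resize decision of this kind, the NGP/ROP steps would incrementally regroup the index and move data to the new set of virtual machines.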
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics
Abstract:
"Lecture notes in computer science series, ISSN 0302-9743, vol. 9273"
Abstract:
This work presents a bibliometric and historiographic analysis of the articles published in the journal PAPERS DE SOCIOLOGIA between 1987 and 2001, in order to find data that offer insight into the state of sociological research carried out in our country, with particular emphasis on the collaboration networks established among the authors who publish their research in this journal.
Abstract:
The theoretical rationale of behavioural finance rests on two main pillars: limits to arbitrage and investor irrationality. Among the known deviations from rationality, one was of particular interest for this study: the availability bias. This bias occurs in situations where people estimate the frequency of a class or the probability of an event by the ease with which instances or occurrences can be recalled. The advent of the internet has made it possible to verify the availability bias on a large scale through the analysis of search data; i.e., if a given stock is searched for more than others, we can infer that it is more available in the collective memory of investors. Behavioural finance also has a more pragmatic branch, which studies strategies capable of delivering abnormal returns, above those expected under the efficient market hypothesis. For the purposes of this study, the momentum effect stands out: the group of stocks with the best performance over the past J months tends to deliver better results over the following K months. The purpose of this study was to verify the possibility of obtaining returns above those captured by the momentum effect by segmenting the portfolios of highest and lowest availability bias. The results obtained were positive and statistically significant in the selected sample. The combined momentum-and-availability strategy produced, for J=6 and K=6, average monthly returns of 2.82% with a t-statistic of 3.14, whereas the momentum-only strategy, for the same formation and holding periods, generated average monthly returns of only 1.40% with a t-statistic of 1.22.
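As an illustration of the J/K momentum construction described above, the following Python sketch forms a winner-minus-loser spread from past J-month returns. The decile cut-offs, equal weighting, and the omission of overlapping K-month holdings are simplifying assumptions, not the study's exact procedure (which additionally segments portfolios by the availability proxy).

    # Hypothetical J=6/K=6 momentum sketch; `prices` holds month-end prices,
    # one column per stock. Not the study's exact methodology.
    import pandas as pd

    def momentum_spread(prices: pd.DataFrame, J: int = 6, quantile: float = 0.1) -> pd.Series:
        monthly = prices.pct_change()               # one-month simple returns
        formation = prices.pct_change(J).shift(1)   # past-J-month return, lagged one month
        spread = {}
        for date in formation.index[J + 1:]:
            ranks = formation.loc[date].dropna()
            winners = ranks[ranks >= ranks.quantile(1 - quantile)].index
            losers = ranks[ranks <= ranks.quantile(quantile)].index
            # Equal-weighted long winners / short losers held this month;
            # averaging K overlapping holding periods is omitted for brevity.
            spread[date] = monthly.loc[date, winners].mean() - monthly.loc[date, losers].mean()
        return pd.Series(spread)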
Abstract:
PURPOSE To identify the influence of fixed prosthesis type on biologic and technical complication rates in the context of screw versus cement retention. Furthermore, a multivariate analysis was conducted to determine which factors, when considered together, influence the complication and failure rates of fixed implant-supported prostheses. MATERIALS AND METHODS Electronic searches of MEDLINE (PubMed), EMBASE, and the Cochrane Library were conducted. Selected inclusion and exclusion criteria were used to limit the search. Data were analyzed statistically with simple and multivariate random-effects Poisson regressions. RESULTS Seventy-three articles qualified for inclusion in the study. Compared with cemented prostheses, screw-retained prostheses showed a tendency toward more technical complications with single crowns and significantly more technical complications with fixed partial prostheses. Resin chipping and ceramic veneer chipping had high mean event rates, at 10.04 and 8.95 per 100 years, respectively, for full-arch screwed prostheses. For "all fixed prostheses" (prosthesis type not reported or not known), significantly fewer biologic and technical complications were seen with screw retention. Multivariate analysis revealed a significantly greater incidence of technical complications with cemented prostheses. Full-arch prostheses, cantilevered prostheses, and "all fixed prostheses" had significantly higher complication rates than single crowns. A significantly greater incidence of technical and biologic complications was seen with cemented prostheses. CONCLUSION Screw-retained fixed partial prostheses demonstrated a significantly higher rate of technical complications, and screw-retained full-arch prostheses demonstrated a notably high rate of veneer chipping. When "all fixed prostheses" were considered, significantly higher rates of technical and biologic complications were seen for cement-retained prostheses. Multivariate Poisson regression analysis failed to show a significant difference between screw- and cement-retained prostheses with respect to the incidence of failure but demonstrated a higher rate of technical and biologic complications for cement-retained prostheses. The incidence of technical complications was more dependent on prosthesis and retention type than on prosthesis or abutment material.
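For context, a Poisson rate regression of the kind the review reports can be sketched in Python with statsmodels, as below. The counts, exposures, and retention indicator are invented for illustration, and the random-effects component of the paper's model is deliberately omitted.

    # Hypothetical data: complication counts, follow-up in prosthesis-years,
    # and cement (1) vs. screw (0) retention. Simple Poisson GLM, not the
    # paper's random-effects model.
    import numpy as np
    import statsmodels.api as sm

    events   = np.array([3, 7, 2, 11, 5])
    years    = np.array([120.0, 310.0, 95.0, 400.0, 180.0])
    cemented = np.array([0, 1, 0, 1, 1])

    X = sm.add_constant(cemented)
    fit = sm.GLM(events, X, family=sm.families.Poisson(), exposure=years).fit()
    print(np.exp(fit.params))   # exp(coef) = complication rate ratio, cemented vs. screwed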
Abstract:
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate of the actual number of animals being killed, but they offer little information on the relation between collision rates and, for example, weather parameters, because the time of death is not precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
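The core idea, carcasses found as a thinned Poisson count whose underlying collision rate is driven by bat activity and wind speed, can be sketched as follows in Python. The detection probability, simulated predictors, and plain GLM are illustrative assumptions; the paper's actual mixture model (with searcher efficiency and carcass persistence) is richer.

    # Hypothetical sketch: found ~ Poisson(p_find * collision_rate), with the
    # collision rate depending on acoustic activity and wind speed.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_nights = 200
    activity = rng.gamma(2.0, 5.0, n_nights)      # acoustic bat passes per night
    wind = rng.uniform(0.0, 12.0, n_nights)       # mean nightly wind speed (m/s)
    true_rate = 0.01 * activity * np.exp(-0.2 * wind)
    p_find = 0.4                                  # assumed carcass detection probability
    found = rng.poisson(p_find * true_rate)       # carcasses actually found

    # Poisson regression of found carcasses on the predictors; the offset
    # log(p_find) corrects the counts for imperfect detection.
    X = sm.add_constant(np.column_stack([np.log(activity), wind]))
    fit = sm.GLM(found, X, family=sm.families.Poisson(),
                 offset=np.full(n_nights, np.log(p_find))).fit()
    print(fit.params)   # recovered effects of activity and wind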
Abstract:
A search is performed for Higgs bosons produced in association with top quarks using the diphoton decay mode of the Higgs boson. Selection requirements are optimized separately for leptonic and fully hadronic final states from the top quark decays. The dataset used corresponds to an integrated luminosity of 4.5 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 7 TeV and 20.3 fb⁻¹ at 8 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. No significant excess over the background prediction is observed and upper limits are set on the tt̄H production cross section. The observed exclusion upper limit at 95% confidence level is 6.7 times the predicted Standard Model cross section value. In addition, limits are set on the strength of the Yukawa coupling between the top quark and the Higgs boson, taking into account the dependence of the tt̄H and tH cross sections as well as the H→γγ branching fraction on the Yukawa coupling. Lower and upper limits at 95% confidence level are set at −1.3 and +8.0 times the Yukawa coupling strength in the Standard Model.
Abstract:
Searches are performed for resonant and non-resonant Higgs boson pair production in the hh→γγbb̄ final state using 20 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 8 TeV recorded with the ATLAS detector at the CERN Large Hadron Collider. A 95% confidence level upper limit on the cross section times branching ratio of non-resonant production is set at 2.2 pb, while the expected limit is 1.0 pb. The corresponding limit observed for a narrow resonance ranges between 0.8 and 3.5 pb as a function of its mass.
Abstract:
Dijet events produced in LHC proton-proton collisions at a center-of-mass energy √s = 8 TeV are studied with the ATLAS detector using the full 2012 data set, with an integrated luminosity of 20.3 fb⁻¹. Dijet masses up to about 4.5 TeV are probed. No resonance-like features are observed in the dijet mass spectrum. Limits on the cross section times acceptance are set at the 95% credibility level for various hypotheses of new phenomena in terms of mass or energy scale, as appropriate. This analysis excludes excited quarks with a mass below 4.09 TeV, color-octet scalars with a mass below 2.72 TeV, heavy W′ bosons with a mass below 2.45 TeV, chiral W* bosons with a mass below 1.75 TeV, and quantum black holes with six extra space-time dimensions with threshold mass below 5.82 TeV.
Abstract:
The results of a search for charged Higgs bosons decaying to a τ lepton and a neutrino, H±→τ±ν, are presented. The analysis is based on 19.5 fb⁻¹ of proton-proton collision data at √s = 8 TeV collected by the ATLAS experiment at the Large Hadron Collider. Charged Higgs bosons are searched for in events consistent with top-quark pair production or in associated production with a top quark. The final state is characterised by the presence of a hadronic τ decay, missing transverse momentum, b-tagged jets, a hadronically decaying W boson, and the absence of any isolated electrons or muons with high transverse momenta. The data are consistent with the expected background from Standard Model processes. A statistical analysis leads to 95% confidence-level upper limits on the product of branching ratios B(t→bH±)×B(H±→τ±ν), between 0.23% and 1.3% for charged Higgs boson masses in the range 80-160 GeV. It also leads to 95% confidence-level upper limits on the production cross section times branching ratio, σ(pp→tH±+X)×B(H±→τ±ν), between 0.76 pb and 4.5 fb, for charged Higgs boson masses ranging from 180 GeV to 1000 GeV. In the context of different scenarios of the Minimal Supersymmetric Standard Model, these results exclude nearly all values of tanβ above one for charged Higgs boson masses between 80 GeV and 160 GeV, and exclude a region of parameter space with high tanβ for H± masses between 200 GeV and 250 GeV.
Abstract:
Magdeburg, University, Faculty of Computer Science, dissertation, 2012
Abstract:
In this paper we design and develop several filtering strategies for the analysis of data generated by a resonant bar gravitational wave (GW) antenna, with the goal of assessing the presence (or absence) therein of long-duration monochromatic GW signals, as well as the amplitude and frequency of any such signals, within the sensitivity band of the detector. Such signals are most likely generated in the fast rotation of slightly asymmetric spinning stars. We develop practical procedures, together with a study of their statistical properties, which provide useful information on the performance of each technique. The selection of candidate events is then established according to threshold-crossing probabilities, based on the Neyman-Pearson criterion. In particular, it is shown that our approach, based on phase estimation, presents a better signal-to-noise ratio than does pure spectral analysis, the most common approach.
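For reference, the Neyman-Pearson construction invoked above can be stated in generic notation (not the paper's):

    Fix a false-alarm probability $\alpha$ and choose the threshold $\eta$ so that
    \[
      P\bigl(\Lambda(x) > \eta \mid H_0\bigr) = \alpha,
      \qquad
      \Lambda(x) = \frac{p(x \mid H_1)}{p(x \mid H_0)},
    \]
    declaring a candidate event whenever the likelihood ratio $\Lambda(x)$ of the
    data $x$ exceeds $\eta$. Among all tests with false-alarm probability $\alpha$,
    this one maximises the detection probability $P(\Lambda(x) > \eta \mid H_1)$,
    which is why threshold-crossing probabilities govern the selection of
    candidate events.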
Abstract:
The question of personal data protection, raised in the context of the assistance and support activities of civilian crisis-management missions, seems to have attracted little interest from the bodies in charge of their management, despite its major importance for the tasks carried out daily by the agents of these missions in the field of police and judicial cooperation in criminal matters. Drawing on field experience, the author endeavours in these pages to demonstrate the need to begin an in-depth reflection on this subject in order, where appropriate, to take useful initiatives to remedy the difficulties that could arise in this area.