959 results for Average Case Complexity


Relevance:

30.00%

Publisher:

Abstract:

We propose an iterative procedure to minimize the sum-of-squares function that avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed form for the estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
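The kind of closed-form iteration the abstract describes can be sketched in a few lines. This is a minimal illustrative version, assuming a zero-mean MA(1) model x_t = e_t + theta * e_{t-1} and a simple fixed-point update; the paper's actual procedure may differ in detail.

```python
import random

def estimate_ma1(x, iterations=20):
    """Iterative linear least-squares estimate of theta in the MA(1)
    model x_t = e_t + theta * e_{t-1}. A minimal illustrative version
    of the kind of procedure the abstract describes, not necessarily
    the authors' exact algorithm."""
    theta = 0.0
    for _ in range(iterations):
        # Filter residuals under the current theta estimate.
        eps, prev = [], 0.0
        for xt in x:
            e = xt - theta * prev
            eps.append(e)
            prev = e
        # Regressing x_t on the lagged residual gives a closed-form
        # linear least-squares update for theta.
        num = sum(x[t] * eps[t - 1] for t in range(1, len(x)))
        den = sum(e * e for e in eps[:-1])
        theta = num / den
    return theta

# Simulated invertible MA(1) series with theta = 0.5.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(5001)]
x = [noise[t] + 0.5 * noise[t - 1] for t in range(1, 5001)]
theta_hat = estimate_ma1(x)  # close to 0.5 up to sampling error
```

Each pass is an ordinary linear regression, which is what makes the update closed-form; the nonlinearity is absorbed into the residual-filtering step.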

Relevance:

30.00%

Publisher:

Abstract:

Thomas-Fermi theory is developed to evaluate nuclear matrix elements averaged on the energy shell, on the basis of independent-particle Hamiltonians. One- and two-body matrix elements are compared with the quantal results, and it is demonstrated that the semiclassical matrix elements, as a function of energy, pass well through the average of the scattered quantum values. For the one-body matrix elements it is shown how the Thomas-Fermi approach can be projected onto good parity and also onto good angular momentum. For the two-body case, the pairing matrix elements are considered explicitly.

Relevance:

30.00%

Publisher:

Abstract:

Semiclassical theories such as the Thomas-Fermi and Wigner-Kirkwood methods give a good description of the smooth average part of the total energy of a Fermi gas in some external potential when the chemical potential is varied. However, in systems with a fixed number of particles N, these methods overbind the actual average of the quantum energy as N is varied. We describe a theory that accounts for this effect. Numerical illustrations are discussed for fermions trapped in a harmonic oscillator potential and in a hard-wall cavity, and for self-consistent calculations of atomic nuclei. In the latter case, the influence of deformations on the average behavior of the energy is also considered.

Relevance:

30.00%

Publisher:

Abstract:

Games are powerful and engaging. On average, one billion people spend at least one hour a day playing computer and video games. This is even more true of the younger generations. Our students have become the "digital natives", the "gamers", the "virtual generation". Research shows that those who are most at risk of failure in the traditional classroom setting also spend more time than their counterparts playing video games. They might thrive, given a different learning environment. Educators have a responsibility to align their teaching style with the learning styles of these younger generations. However, many academics resist the use of computer-assisted learning that has been "created elsewhere". This can be extrapolated to game-based teaching: even if educational games were more widely authored, their adoption would still be limited to the educators who feel a match between the authored games and their own beliefs and practices. Consequently, game-based teaching would be much more widespread if teachers could develop their own games, or at least customize them. Yet the development and customization of teaching games are complex and costly. This research uses a design science methodology, leveraging gamification techniques, active and cooperative learning theories, and immersive sandbox 3D virtual worlds, to develop a method that allows management instructors to transform any off-the-shelf case study into an engaging collaborative gamified experience. The method is applied to marketing case studies and uses the sandbox virtual world of Second Life.

Relevance:

30.00%

Publisher:

Abstract:

For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree, and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether the performance levels were affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods showed poorer performance, or at least did not behave the same way as the total evidence tree. Results for the super distance matrix, that is, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting a correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set sizes and missing data. Results also showed that the MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
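For readers unfamiliar with MRP, the standard Baum/Ragan coding behind it can be sketched briefly: each clade of each input tree becomes one binary character, with '?' for taxa missing from that tree. The data layout below (a taxon set plus a clade list per tree) is an illustrative simplification, not the interface of any particular supertree package.

```python
def mrp_matrix(trees, taxa):
    """Baum/Ragan matrix representation with parsimony (MRP) coding:
    every clade of every input tree becomes one binary character,
    '1' for taxa inside the clade, '0' for other taxa present in that
    tree, and '?' for taxa absent from the tree. The (taxon_set,
    clade_list) input layout is an illustrative simplification."""
    matrix = {t: [] for t in taxa}
    for tree_taxa, clades in trees:
        for clade in clades:
            for t in taxa:
                if t not in tree_taxa:
                    matrix[t].append('?')
                else:
                    matrix[t].append('1' if t in clade else '0')
    return matrix

# One input tree on {A, B, C} with the single clade (A, B); taxon D
# is missing from this tree and is therefore coded '?'.
coding = mrp_matrix([({'A', 'B', 'C'}, [{'A', 'B'}])], ['A', 'B', 'C', 'D'])
```

The resulting matrix is then analyzed with ordinary parsimony software, which is why MRP scales with the heuristic search rather than with any supertree-specific machinery.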

Relevance:

30.00%

Publisher:

Abstract:

Surveys of economic climate collect the opinions of managers about the short-term future evolution of their business. Interviews are carried out on a regular basis, and responses measure optimistic, neutral or pessimistic views about the economic perspectives. We propose a method to evaluate the sampling error of the average opinion derived from a particular type of survey data. Our variance estimate is useful for interpreting historical trends and for deciding whether changes in the index from one period to another are due to a structural change or whether ups and downs can be attributed to sampling randomness. An illustration using real data from a survey of business managers' opinions is discussed.
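Under simple random sampling, a textbook estimate of the sampling variance of such an average opinion (the "balance" of optimistic minus pessimistic shares) follows from the multinomial distribution of a +1/0/-1 score. The paper's estimator is tailored to its specific survey design, so the sketch below is only the basic idea.

```python
def balance_and_se(n_opt, n_neu, n_pes):
    """Balance statistic (share of optimists minus share of
    pessimists) and its sampling standard error under simple random
    sampling, from the multinomial variance of a +1/0/-1 score.
    The paper's estimator is adapted to its survey design; this is
    only the textbook baseline."""
    n = n_opt + n_neu + n_pes
    p_opt, p_pes = n_opt / n, n_pes / n
    balance = p_opt - p_pes
    # Var of the score is E[s^2] - (E[s])^2 = (p_opt + p_pes) - balance^2;
    # the mean of n such scores divides this by n.
    variance = (p_opt + p_pes - balance ** 2) / n
    return balance, variance ** 0.5

b, se = balance_and_se(50, 30, 20)  # balance = 0.3
```

A change in the index between two periods that is small relative to the combined standard errors can then be attributed to sampling randomness rather than to a structural change.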

Relevance:

30.00%

Publisher:

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.

Relevance:

30.00%

Publisher:

Abstract:

A new model for dealing with decision making under risk, considering subjective and objective information in the same formulation, is presented here. The uncertain probabilistic weighted average (UPWA) is also presented. Its main advantage is that it unifies the probability and the weighted average in the same formulation while considering the degree of importance of each case in the analysis. Moreover, it is able to deal with uncertain environments represented in the form of interval numbers. We study some of its main properties and particular cases. The applicability of the UPWA is also studied and is seen to be very broad, because all previous studies that use the probability or the weighted average can be revisited with this new approach. Focus is placed on a multi-person decision making problem regarding the selection of strategies, using the theory of expertons.
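The unification idea can be illustrated with a small sketch: a convex combination, with a coefficient beta, of the probabilistic weights and the importance weights, applied endpoint-wise to interval arguments. This is an assumed minimal form, not the paper's formal definition of the UPWA operator.

```python
def upwa(intervals, probs, weights, beta):
    """Uncertain probabilistic weighted average, sketched as a convex
    combination (coefficient beta) of the probability vector and the
    importance-weight vector, applied endpoint-wise to interval
    numbers. An assumed minimal form; see the paper for the formal
    operator."""
    combined = [beta * p + (1 - beta) * w for p, w in zip(probs, weights)]
    lo = sum(c * a for c, (a, _) in zip(combined, intervals))
    hi = sum(c * b for c, (_, b) in zip(combined, intervals))
    return lo, hi

# Two interval arguments, equal probabilities, importance weights
# 0.8/0.2, and beta = 0.5 weighting the two aggregation modes equally.
lo, hi = upwa([(1, 2), (3, 4)], [0.5, 0.5], [0.8, 0.2], 0.5)
```

Setting beta = 1 recovers the pure probabilistic average and beta = 0 the pure weighted average, which is the sense in which the operator unifies the two.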

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: The aim of this study was to describe the demographic, social and medical characteristics, and the healthcare use, of highly frequent users of a university hospital emergency department (ED) in Switzerland. METHODS: A retrospective consecutive case series was performed. We included all highly frequent users, defined as patients attending the ED 12 times or more within a calendar year (1 January 2009 to 31 December 2009). We collected their characteristics and calculated a score of accumulated risk factors of vulnerability. RESULTS: Highly frequent users comprised 0.1% of ED patients and accounted for 0.8% of all ED attendances (23 patients, 425 attendances). Of all highly frequent users, 87% had a primary care practitioner, 82.6% were unemployed, 73.9% were socially isolated, and 60.9% had a mental health or substance use primary diagnosis. One-third had attempted suicide during the study period, all of them women. They were often admitted (24.0% of attendances), and only 8.7% were uninsured. On average, they accumulated 3.3 different risk factors of vulnerability (SD 1.4). CONCLUSION: Highly frequent users of a Swiss academic ED are a highly vulnerable population. They are in poor health and accumulate several risk factors for even poorer health. The small number of patients and their high level of insurance coverage make it particularly feasible to design a specific intervention to address their needs, in close collaboration with their primary care practitioners. Elaboration of the intervention should focus on social reintegration and on risk-reduction strategies with regard to substance use, hospital admissions and suicide.

Relevance:

30.00%

Publisher:

Abstract:

There is currently a considerable diversity of quantitative measures available for summarizing the results of single-case studies. Given that the interpretation of some of them is difficult due to the lack of established benchmarks, the current paper proposes an approach for obtaining further numerical evidence on the importance of the results, complementing the substantive criteria, visual analysis, and primary summary measures. This additional evidence consists of obtaining the statistical significance of the outcome when referred to the corresponding sampling distribution. This sampling distribution is formed by the values of the outcome (expressed as data nonoverlap, R-squared, etc.) when the intervention is ineffective. The approach proposed here is intended to offer the outcome's probability of being as extreme when there is no treatment effect, without the need for assumptions that cannot be checked with guarantees. Following this approach, researchers would compare their outcomes to reference values rather than constructing the sampling distributions themselves. The integration of single-case studies is problematic when different metrics are used across primary studies and not all raw data are available. Via the approach for assigning p values it is possible to combine the results of similar studies regardless of the primary effect size indicator. The alternatives for combining probabilities are discussed in the context of single-case studies, pointing out two potentially useful methods: one based on a weighted average and the other on the binomial test.
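Of the two combination methods mentioned, the binomial test is the simpler to sketch: count how many of the m studies reached significance at level alpha and compute the probability of at least that many under a Binomial(m, alpha) null. The weighted-average variant would instead average the primary p values with study weights; both sketches assume independent studies.

```python
from math import comb

def binomial_combination(p_values, alpha=0.05):
    """Combine per-study p values with the binomial test: the
    probability of at least the observed number of significant
    results under a Binomial(m, alpha) null, assuming independent
    studies. One of the two combination methods the abstract
    mentions; the weighted-average variant averages the p values
    with study weights instead."""
    m = len(p_values)
    k = sum(p < alpha for p in p_values)
    # Upper tail of Binomial(m, alpha) from the observed count k.
    return sum(comb(m, i) * alpha ** i * (1 - alpha) ** (m - i)
               for i in range(k, m + 1))

combined = binomial_combination([0.01, 0.20, 0.03, 0.60])  # 2 of 4 significant
```

The method discards the magnitudes of the individual p values and keeps only their significance counts, which is exactly what makes it usable when primary studies report different effect size metrics.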

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: Prospective epidemiologic studies have consistently shown that levels of circulating androgens in postmenopausal women are positively associated with breast cancer risk. However, data in premenopausal women are limited. METHODS: A case-control study nested within the New York University Women's Health Study was conducted. A total of 356 cases (276 invasive and 80 in situ) and 683 individually matched controls were included. Matching variables included age and the date, phase, and day of menstrual cycle at blood donation. Testosterone, androstenedione, dehydroepiandrosterone sulfate (DHEAS) and sex hormone-binding globulin (SHBG) were measured using direct immunoassays. Free testosterone was calculated. RESULTS: Premenopausal serum testosterone and free testosterone concentrations were positively associated with breast cancer risk. In models adjusted for known risk factors for breast cancer, the odds ratios for increasing quintiles of testosterone were 1.0 (reference), 1.5 (95% confidence interval (CI), 0.9 to 2.3), 1.2 (95% CI, 0.7 to 1.9), 1.4 (95% CI, 0.9 to 2.3) and 1.8 (95% CI, 1.1 to 2.9; Ptrend = 0.04), and for free testosterone were 1.0 (reference), 1.2 (95% CI, 0.7 to 1.8), 1.5 (95% CI, 0.9 to 2.3), 1.5 (95% CI, 0.9 to 2.3), and 1.8 (95% CI, 1.1 to 2.8; Ptrend = 0.01). A marginally significant positive association was observed with androstenedione (P = 0.07), but no association with DHEAS or SHBG. Results were consistent in analyses stratified by tumor type (invasive, in situ), estrogen receptor status, age at blood donation, and menopausal status at diagnosis. Intra-class correlation coefficients for samples collected from 0.8 to 5.3 years apart (median 2 years) in 138 cases and 268 controls were greater than 0.7 for all biomarkers except androstenedione (0.57 in controls). CONCLUSIONS: Premenopausal concentrations of testosterone and free testosterone are associated with breast cancer risk.
Testosterone and free testosterone measurements are also highly reliable (that is, a single measurement is reflective of a woman's average level over time). Results from other prospective studies are consistent with our results. The impact of including testosterone or free testosterone in breast cancer risk prediction models for women between the ages of 40 and 50 years should be assessed. Improving risk prediction models for this age group could help decision making regarding both screening and chemoprevention of breast cancer.

Relevance:

30.00%

Publisher:

Abstract:

The main objective of the thesis was to analyse which factors influence the assessment of qualitative factors in a bank's credit ratings and what effect these factors have on the rating. The study also assessed the effect of qualitative factors on the bank's capital requirement. The study covered 408 corporate customers of the bank and used a statistical research method. The standards set by the Basel Committee on Banking Supervision and studies published in finance journals formed the theoretical framework of the study, and the research hypotheses were derived from these studies. The results showed that the difference between the assessments of qualitative and financial factors is not significantly affected by the firm's age, size, or the duration of the customer relationship; industry had no effect either. There were differences between branch offices in the assessment of qualitative factors. Qualitative factors are, on average, assessed more positively than financial factors, but the differences across the whole sample were very small. Credit ratings rise slightly as a result of the qualitative factors, which has a lowering effect on the capital requirement. There may, however, be well-founded reasons for the difference between the assessments of qualitative and financial factors. Based on the results, it can be concluded that the assessment of qualitative factors is not influenced by individual firm-specific factors. In addition to the rater's subjective factors, many other factors not examined in this study may also play a role.

Relevance:

30.00%

Publisher:

Abstract:

The aim of the thesis was to determine the renewal capability of the case organization, Stora Enso's Anjala paper mill, the conformity of its operations with strategy, and the strengths and areas for development in its renewal capability. The study used a quantitative research approach. The data were collected with two different instruments: the KM-Factor renewal-capability instrument and an instrument measuring organizational change capability. The KM-Factor instrument collected information on the current and target states of the organization's relationships, information flow, competence, and management in different knowledge environments. The change-capability instrument collected information on the organization's renewal capability based on six characteristics describing it: strategic capability, leadership, use of time, networking, knowledge management, and learning orientation. The measurements were carried out in February 2006 and covered the entire personnel of Stora Enso's Anjala paper mill. The results of the two renewal-capability instruments were analysed and compared with each other, which also served to increase the validity of the study. The measurement results were written up and reported to the case organization. According to the measurements, the renewal capability of the case organization was slightly below average. Strengths of the operations were unanimity about the need for change and situational sensitivity. The most significant weaknesses were found in the current way of operating, which was not in line with the strategy, and in the innovation potential, which was very low. Development needs were found in the relationships and cooperation of both management and personnel. The whole organization should increase cooperation with external parties and stakeholders. The organization should consider how the strategy could be communicated clearly to the levels below management. Management placed too much emphasis on authoritarian leadership, while the personnel would be expected to take more responsibility for their actions. The organization should allow its members to express their views more openly and independently, take initiative, and create new ideas for developing operations. Innovativeness and improvement proposals should also be rewarded better.

Relevance:

30.00%

Publisher:

Abstract:

This thesis examines the price dynamics of the indexation factors used in natural gas pricing and their effect on the formation of the natural gas price. The main objective is to assess the suitability of different time-series methods for forecasting the indexation factors. This was done by analysing the properties of different models and methods and matching them to the specific characteristics of price formation for the different energy forms. The source data used in the thesis were obtained from the database of Gasum Oy. Three indexation factors are used in natural gas pricing, with the following weights: heavy fuel oil 50%, the E40 index 30%, and coal 20%. The price data for coal and heavy fuel oil consist of tax-free monthly averages in US dollars for the period 1 January 1997 to 31 October 2004. The index data for E40, a sub-index of the domestic basic price index that describes the development of energy producer prices in Finland, consist of monthly values published by Statistics Finland for the period 1 January 2000 to 31 October 2004. The forecasting ability of the models examined in the study turned out to be weak. However, based on the results, the EWMA model gave the least biased short-term forecast. The other models tested were unable to give sufficiently reliable and accurate forecasts. Traditional time-series analysis was able to identify the seasonal variations and trends of the series, and the moving-average method proved somewhat useful for identifying short-term trends in the series.
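The EWMA model the abstract singles out for short-term forecasting reduces to one recursion. The sketch below shows that recursion; the smoothing constant is an illustrative choice, not a value taken from the thesis.

```python
def ewma_forecast(series, lam=0.2):
    """One-step-ahead forecast by exponentially weighted moving
    average smoothing: s_t = lam * x_t + (1 - lam) * s_{t-1}.
    The smoothing constant is an illustrative choice, not the value
    used in the thesis."""
    s = series[0]
    for x in series[1:]:
        s = lam * x + (1 - lam) * s
    return s  # forecast for the next period

f = ewma_forecast([10, 12, 11], lam=0.5)
```

Larger smoothing constants track recent prices more closely, smaller ones average over a longer history; the choice governs how quickly the forecast reacts to shifts in the fuel-price series.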