948 results for semi-empirical methods
Abstract:
BACKGROUND: Several analysis software packages for myocardial blood flow (MBF) quantification from cardiac PET studies exist, but they have not been compared using concordance analysis, which can characterize precision and bias separately. Reproducible measurements are needed for quantification to fully develop its clinical potential. METHODS: Fifty-one patients underwent dynamic Rb-82 PET at rest and during adenosine stress. Data were processed with PMOD and FlowQuant (Lortie model). MBF and myocardial flow reserve (MFR) polar maps were quantified and analyzed using a 17-segment model. Comparisons used Pearson's correlation ρ (measuring precision), Bland-Altman limits of agreement, and Lin's concordance correlation ρc = ρ·Cb (with Cb measuring systematic bias). RESULTS: Lin's concordance and Pearson's correlation values were very similar, suggesting no systematic bias between software packages, with excellent precision for MBF (ρ = 0.97, ρc = 0.96, Cb = 0.99) and good precision for MFR (ρ = 0.83, ρc = 0.76, Cb = 0.92). On a per-segment basis, no mean bias was observed on Bland-Altman plots, although PMOD provided slightly higher values than FlowQuant at higher MBF and MFR values (P < .0001). CONCLUSIONS: Concordance between software packages was excellent for MBF and MFR, despite higher values by PMOD at higher MBF values. Both software packages can be used interchangeably for quantification in daily practice of Rb-82 cardiac PET.
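The concordance measures in this abstract follow directly from their definitions. The sketch below (an illustration, not the authors' code) computes Pearson's ρ, Lin's concordance ρc, and the bias-correction factor Cb = ρc/ρ for two sets of paired measurements:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation rho_c = rho * C_b, where rho is
    Pearson's correlation (precision) and C_b is the bias-correction
    factor (closeness of the fit to the identity line)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()     # population covariance
    rho = cov / np.sqrt(vx * vy)           # Pearson precision
    rho_c = 2 * cov / (vx + vy + (mx - my) ** 2)
    c_b = rho_c / rho                      # systematic-bias factor
    return rho, rho_c, c_b
```

A constant offset between two software packages leaves ρ unchanged but pulls Cb (and hence ρc) below 1, which is exactly the precision/bias separation the study exploits.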
Abstract:
Recently, there has been considerable interest in solving viscoelastic problems in 3D, particularly with the improvement in modern computing power. In many applications the emphasis has been on economical algorithms which can cope with the extra complexity that the third dimension brings. Storage and computer time are of the essence. An advantage of the finite volume formulation is that it does not require a large amount of memory. Iterative methods rather than direct methods can be used to solve the resulting linear systems efficiently.
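As an illustration of the memory economy of iterative methods the abstract alludes to, the sketch below (an assumption, not code from the paper) applies the conjugate gradient method to a symmetric positive-definite system; it needs only matrix-vector products and a few work vectors, so the matrix is never factorized:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A using only
    matrix-vector products -- no factorization, minimal storage."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

For the sparse systems arising from a finite volume discretization, `A @ p` would be replaced by a stencil application, so the full matrix need never be stored.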
Abstract:
Designing usable software benefits end users and other stakeholders. In e-commerce, usability is vital, because customers easily move on to the next site if they cannot find what they are looking for. Studies show that usability affects purchase decisions. Usability also matters for customer satisfaction, which in turn affects customer loyalty. This thesis examined how usability is designed in practice compared with theoretical recommendations. The case study concerned a project to redesign the online store of an international furniture retailer. The need for the redesign arose from the inadequate usability of the previous version of the store. The project was carried out with the agile Scrum method. The empirical data were collected through semi-structured interviews. The interviewees were people involved in user experience design. The interview themes were drawn up on the basis of the theoretical material. The theoretical part examined the principles, process, and methods of usability design. Twelve principles that support and characterize user-centered design were identified in prior research. Usability is designed through a user-centered process. The various process models regarded as central the definition and understanding of the context of use, measurable usability requirements, empirical evaluation of design solutions, and the iterative nature of the design process. In addition, the study examined which design methods researchers propose for usability design and which, according to survey studies, are actually used. In the online store project, usability design differed in part from the theoretical recommendations. Not all project participants had knowledge of the context of use, and usability requirements had not been set in the way the theory intends. Similarities were also found.
In the online store project, design solutions were evaluated empirically with representatives of real users. The design process was iterative: there was readiness to change design solutions as a result of evaluation. Based on the study, it is recommended that communication in the online store project be improved, because knowledge of the context of use did not reach everyone working on the project. The theory should place even more emphasis on the importance of communication. The study also suggests that theory should give better guidance for specifying requirements in practice. Keywords: usability, user-centered design, usability principles, usability design methods, agile software development, case study
Abstract:
SILVA, João B. da et al. Estado Nutricional de Escolares do Semi-Árido do Nordeste Brasileiro. Revista de Salud Pública, v. 11, n. 1, p. 62-71, 2009.
Abstract:
Most economic transactions nowadays are due to the effective exchange of information, in which digital resources play a huge role. New actors are coming into existence all the time, so organizations are facing difficulties in keeping their current customers and attracting new customer segments and markets. Companies are trying to find the key to their success, and creating superior customer value seems to be one solution. Digital technologies can be used to deliver value to customers in ways that extend customers’ normal conscious experiences in the context of time and space. By creating customer value, companies can gain the increased loyalty of existing customers and better ways to serve new customers effectively. Based on these assumptions, the objective of this study was to design a framework to enable organizations to create customer value in digital business. The research was carried out as a literature review and an empirical study, which consisted of a web-based survey and semi-structured interviews. The data from the empirical study were analyzed as mixed research with qualitative and quantitative methods. These methods were used because the object of the study was to gain a deeper understanding of an existing phenomenon. Accordingly, the study used statistical procedures, and value creation is described as a phenomenon. The framework was designed first based on the literature and then updated based on the findings from the empirical study. As a result, relationship, understanding the customer, focusing on the core product or service, the product or service quality, incremental innovations, service range, corporate identity, and networks were chosen as the top elements of customer value creation. Measures for these elements were identified. With the measures, companies can manage the elements in value creation when dealing with present and future customers and also manage the operations of the company.
In conclusion, creating customer value requires understanding the customer and a lot of information sharing, which can be eased by digital resources. Understanding the customer helps to produce products and services that fulfill customers’ needs and desires. This could result in increased sales and make it easier to establish efficient processes.
Abstract:
Financial constraints influence the corporate policies of firms, including both investment decisions and external financing policies. The relevance of this phenomenon became more pronounced during and after the 2007/2008 financial crisis. In addition to raising the costs of external financing, the financial crisis limited the availability of external financing, which had implications for employment, investment, sales of assets, and technology spending. This thesis provides a comprehensive analysis of the effects of financial constraints on share issuance and repurchase decisions. Financial constraints comprise both internal constraints, reflecting the demand for external financing, and external financial constraints, which relate to the supply of external financing. The study also examines both the operating performance and the stock market reactions associated with equity issuance methods. The first empirical chapter explores the simultaneous effects of financial constraints and market timing on share issuance decisions. Internal financing constraints limit firms’ ability to issue overvalued equity. On the other hand, financial crises and low market liquidity (external financial constraints) restrict the availability of equity financing and consequently increase the costs of external financing. Therefore, the study explores the extent to which internal and external financing constraints limit the market timing of equity issues. This study finds that financial constraints play a significant role in whether firms time their equity issues when the shares are overvalued. The conclusion is that financially constrained firms issue overvalued equity when the external equity market or the general economic conditions are favourable. During recessionary periods, costs of external finance increase such that financially constrained firms are less likely to issue overvalued equity. Only unconstrained firms are more likely to issue overvalued equity even during a crisis.
Similarly, small firms that need cash flows to finance growth projects are less likely to access external equity financing during periods of significant economic recession. Moreover, constrained firms have low average stock returns compared to unconstrained firms, especially when they issue overvalued equity. The second chapter examines the operating performance and stock returns associated with equity issuance methods. Firms in the UK can issue equity through rights issues, open offers, and private placements. This study argues that alternative equity issuance methods are associated with different levels of operating performance and long-term stock returns. Firms using private placements are associated with poor operating performance. Rights issues, by contrast, are found empirically to be associated with higher operating performance and less negative long-term stock returns after issuance in comparison to counterpart firms that issue private placements and open offers. Thus, rights-issuing firms perform better than firms using open offers and private placements because the favourable operating performance at the time of issuance generates a subsequent positive long-run stock price response. Rights-issuing firms are of better quality and outperform firms that adopt open offers and private placements. In the third empirical chapter, the study explores the levered share repurchases of internally financially unconstrained firms. Unconstrained firms are expected to repurchase their shares using internal funds rather than through external borrowing. However, evidence shows that levered share repurchases are common among unconstrained firms. These firms display this repurchase behaviour when they have bond ratings or investment grade ratings that allow them to obtain cheap external debt financing. It is found that internally financially unconstrained firms borrow to finance their share repurchases when they invest more.
Levered repurchase firms are associated with less positive abnormal returns than unlevered repurchase firms. Within the levered repurchase sample, high-investing firms are associated with more positive long-run abnormal stock returns than low-investing firms. It appears the market underreacts to levered repurchases in the short run regardless of the level of investment. These findings indicate that market reactions reflect both the undervaluation and the signaling hypotheses of positive information associated with share repurchases. As firms undertake capital investments, they generate future cash flows, limit the effects of leverage on financial distress, and ultimately reduce the risk of the equity capital.
Abstract:
BACKGROUND: Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for physicians' decisions on office location, covering demand-side factors and a consumption time function. METHODS: To test the propositions following from the theoretical model, generalized linear models were estimated to explain differences across 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. RESULTS: Evidence in favor of the first three propositions of the theoretical model was found. Specialists show a stronger association with more highly populated districts than GPs do. Although indicators of regional preferences are significantly correlated with physician density, their coefficients are not as high as that of population density. CONCLUSIONS: If regional disparities are to be addressed by political action, the focus should be on counteracting those parameters that represent physicians' preferences in over- and undersupplied regions.
Abstract:
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims, and how experimental design might aid this procedure. It is indicated that, given a criterion for design, a priori optimal design will only be possible in certain instances and that, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns this problem. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be approximated well by an exact design. In situations where this is not satisfactory, algorithms for improving this design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard to both the philosophies underlying, and the application of the methods of, statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance.
Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5. The results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by reducing the dimension of the design space and by the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally, we consider the design of experiments when semi-sequential experimentation is mandatory because of the necessity of taking batches of observations at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong when these assumptions no longer hold.
Abstract:
The main purpose of this work is to investigate the contribution of supply chain management to the attainment of competitive advantage by companies in the textile and footwear industries of Ceará, focusing its analysis mainly on interorganizational (dyadic) relations. To this end, the theoretical framework covers different explanatory streams of competitive advantage, highlighting the relational view of resource-based theory, as well as the main assumptions of supply chain management, and culminates in the development of the analytical model that guides the empirical study. This model considers an expanded scope of the supply chain that includes the government and support institutions as representatives of the institutional environment. Besides considering supply chain management as a source of competitive advantage, the work also sought to identify other possible sources of competitive advantage for companies in the investigated sectors. It is an interpretive multiple-case study of four cases in total, two in each sector, which used a semi-structured interview schedule as the primary data-collection instrument. Different methods were used for data analysis: content analysis and the constant comparison method, the latter an analytical procedure originating from the grounded theory research strategy, applied with the Atlas/ti software. Considering the theoretical framework and the analytical model used, four basic categories were defined, with their respective properties and dimensions: (1) characteristics of the relationship with suppliers; (2) the company's relations with the government; (3) the company's relations with support institutions; and (4) sources of competitive advantage.
In general, the research in the footwear sector revealed that, in the relationships of the studied companies with their suppliers, partnership arrangements predominate and the main assumptions of supply chain management are applied, which contributes to the attainment of relational competitive advantage; in the textile sector, only some of these assumptions are applied, with little contribution to relational competitive advantage. The main resource accessed by companies in both sectors through their relationships with the government and the support institutions is tax incentives. For the footwear companies, these contribute to a temporary competitive advantage over competitors without production facilities in the Northeast region, and lead to competitive parity with competitors that do have production facilities in the Northeast region and with foreign competitors; for the textile companies, the tax incentives lead to competitive parity with their competitors. Furthermore, the investigated companies in both sectors have sources of competitive advantage that align with different explanatory streams (industrial analysis, resource-based theory, the Austrian school, and dynamic capabilities theory), although product innovation predominates as a source of competitive advantage in both sectors, owing to their ties to fashion trends.
Abstract:
In this thesis, we propose several advances in the numerical and computational algorithms that are used to determine tomographic estimates of physical parameters in the solar corona. We focus on methods for both global dynamic estimation of the coronal electron density and estimation of local transient phenomena, such as coronal mass ejections, from empirical observations acquired by instruments onboard the STEREO spacecraft. We present a first look at tomographic reconstructions of the solar corona from multiple points of view, which motivates the developments in this thesis. In particular, we propose a method for linear equality constrained state estimation that leads toward more physical global dynamic solar tomography estimates. We also present a formulation of the local static estimation problem, i.e., the tomographic estimation of local events and structures like coronal mass ejections, that couples the tomographic imaging problem to a phase-field-based level set method. This formulation will render feasible the 3D tomography of coronal mass ejections from limited observations. Finally, we develop a scalable algorithm for ray tracing dense meshes, which allows efficient computation of many of the tomographic projection matrices needed for the applications in this thesis.
Abstract:
Background: The evidence base on end-of-life care in acute stroke is limited, particularly with regard to recognising dying and related decision-making. There is also limited evidence to support the use of end-of-life care pathways (standardised care plans) for patients who are dying after stroke. Aim: This study aimed to explore the clinical decision-making involved in placing patients on an end-of-life care pathway, evaluate predictors of care pathway use, and investigate the role of families in decision-making. The study also aimed to examine experiences of end-of-life care pathway use for stroke patients, their relatives and the multi-disciplinary health care team. Methods: A mixed methods design was adopted. Data were collected in four Scottish acute stroke units. Case-notes were identified prospectively from 100 consecutive stroke deaths and reviewed. Multivariate analysis was performed on case-note data. Semi-structured interviews were conducted with 17 relatives of stroke decedents and 23 healthcare professionals, using a modified grounded theory approach to collect and analyse data. The VOICES survey tool was also administered to the bereaved relatives and data were analysed using descriptive statistics and thematic analysis of free-text responses. Results: Relatives often played an important role in influencing aspects of end-of-life care, including decisions to use an end-of-life care pathway. Some relatives experienced enduring distress with their perceived responsibility for care decisions. Relatives felt unprepared for and were distressed by prolonged dying processes, which were often associated with severe dysphagia. Pro-active information-giving by staff was reported as supportive by relatives. Healthcare professionals generally avoided discussing place of care with families. 
Decisions to use an end-of-life care pathway were not predicted by patients’ demographic characteristics; decisions were generally made in consultation with families and the extended health care team, and were made within regular working hours. Conclusion: Distressing stroke-related issues were more prominent in participants’ accounts than concerns with the end-of-life care pathway used. Relatives sometimes perceived themselves as responsible for important clinical decisions. Witnessing prolonged dying processes was difficult for healthcare professionals and families, particularly in relation to the management of persistent major swallowing difficulties.
Abstract:
Background and Purpose: At least part of the failure in the transition from experimental to clinical studies in stroke has been attributed to the imprecision introduced by problems in the design of experimental stroke studies. Using a metaepidemiologic approach, we addressed the effect of randomization, blinding, and use of comorbid animals on the estimate of how effectively therapeutic interventions reduce infarct size. Methods: Electronic and manual searches were performed to identify meta-analyses that described interventions in experimental stroke. For each meta-analysis thus identified, a reanalysis was conducted to estimate the impact of various quality items on the estimate of efficacy, and these estimates were combined in a meta-meta-analysis to obtain a summary measure of the impact of the various design characteristics. Results: Thirteen meta-analyses that described outcomes in 15,635 animals were included. Studies that included unblinded induction of ischemia reported effect sizes 13.1% (95% CI, 26.4% to 0.2%) greater than studies that included blinding, and studies that included healthy animals instead of animals with comorbidities overstated the effect size by 11.5% (95% CI, 21.2% to 1.8%). No significant effect was found for randomization, blinded outcome assessment, or high aggregate CAMARADES quality score. Conclusions: We provide empirical evidence of bias in the design of studies, with studies that included unblinded induction of ischemia or healthy animals overestimating the effectiveness of the intervention. This bias could account for the failure in the transition from bench to bedside of stroke therapies.
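The abstract does not specify the pooling formula used in the meta-meta-analysis, but a standard fixed-effect inverse-variance combination, sketched below as an assumption, illustrates how per-meta-analysis estimates of a design characteristic's impact could be combined into a summary measure with a confidence interval:

```python
import numpy as np

def pool_fixed_effect(estimates, variances):
    """Fixed-effect inverse-variance pooling: each estimate is weighted
    by the reciprocal of its variance; the pooled standard error is the
    square root of the reciprocal of the total weight."""
    w = 1.0 / np.asarray(variances, float)
    est = np.asarray(estimates, float)
    pooled = np.sum(w * est) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # 95% CI, normal approx.
    return pooled, se, ci
```

A random-effects variant would add a between-study variance component to each weight; the abstract does not say which model was used.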
Abstract:
Small p-type molecules with narrow bandgaps are increasingly seen as possible replacements for the semiconducting polymers currently used, together with n-type fullerene derivatives, in organic photovoltaic (OPV) cells. However, these small molecules tend to crystallize readily when applied as thin layers and rarely form suitable homogeneous films. Bulk-heterojunction OPV devices were fabricated by adding various semiconducting or insulating polymers, which act as matrices that correct the inhomogeneities of the active films and increase photovoltaic cell performance. Polymers with specific molar masses were synthesized via the Wittig reaction by precisely controlling the molar ratios of the monomers and of the base used. The effect of varying the molar mass on the resulting thin-film morphologies and on the performance of the related organic light-emitting diodes was also studied. Transmission electron microscopy (TEM) and scanning electron microscopy (SEM) were used alongside atomic force microscopy (AFM) to follow the evolution of the morphology of the thin organic films. A new, rapid method of preparing films for TEM imaging on silicon substrates is also presented and compared with other extraction methods. Motivated by the high price and scarcity of the metals used in indium tin oxide (ITO) substrates, the development of a new eco-responsible method for recycling the substrates used in these studies is also presented.
Abstract:
Sequential panel selection methods (spsms — procedures that sequentially use conventional panel unit root tests to identify I(0) time series in panels) are increasingly used in the empirical literature. We check the reliability of spsms by using Monte Carlo simulations based on generating directly the individual asymptotic p values to be combined into the panel unit root tests, in this way isolating the classification abilities of the procedures from the small sample properties of the underlying univariate unit root tests. The simulations consider both independent and cross-dependent individual test statistics. Results suggest that spsms may offer advantages over time series tests only under special conditions.
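A common way to combine individual p values into a panel unit root test — and hence the kind of statistic an spsm procedure would iterate over — is the Fisher-type combination used by Maddala and Wu. The sketch below is an illustration of that general approach, not code from the paper:

```python
import math

def fisher_panel_statistic(p_values):
    """Fisher-type panel unit root statistic: P = -2 * sum(ln p_i),
    chi-square with 2N degrees of freedom under the joint null that
    every series in the panel has a unit root."""
    return -2.0 * sum(math.log(p) for p in p_values)
```

Generating the individual asymptotic p values directly, as the simulations do, feeds such a combination without ever running the underlying univariate unit root tests, which is what isolates the procedure's classification ability from small-sample test properties.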