397 results for analysts
Abstract:
The field of health policy and administration has attracted growing interest in recent decades. This is probably a consequence of the substantial rise in health expenditure seen worldwide, but also of the marked improvement in the health status of populations, which leads policy makers, academics, sector analysts and the media to bring health issues to the front pages, giving them prominence and trying to improve understanding of the very complex process of health care delivery. This improvement, however, is not usually quantified: while attempts to measure the costs and the output of health care, a sector with a significant economic dimension, are frequent, the same does not happen with its outcomes (the impact that care has had on the health of populations), and even less with the so-called "health gains", which are, after all, the ultimate goal of health systems. Thus, between rising expenditure and improving results there is a missing link that makes it difficult to strike a balance, so it is urgent to adopt explicit models for evaluating care delivery and its results that help validate the effectiveness of the care provided and of the results obtained. This work aims to contribute to clarifying this issue and to identify a routine indicator that can be used to objectify "health gains" and that, because it is quantifiable, can support the definition of measures of the effectiveness of the results obtained and of health system performance assessment. It will not be yet another measure of production (outputs), but one that can solve long-standing problems, support the comparison of resources against results, and allow the performance of health systems to be assessed with consistency against their objectives and with reliability, being able to detect changes and to show differences.
Abstract:
The risk management process consists of a structured study of all aspects inherent to the work and comprises risk analysis, risk evaluation and risk control. In risk analysis, all hazards present are identified and probability and severity are estimated in accordance with the chosen risk assessment method. This study focuses on the first stage of the risk assessment process, specifically on risk analysis and on the information markers needed to carry out risk estimation in the open-pit extractive industry (a high-risk activity). Given that the resulting risk level depends fundamentally on the estimation of probability and severity, adjusted to each risk situation, the aim was to identify these markers and to understand their influence on the results of the risk assessment (magnitude). The research plan was supported by a qualitative methodology for data collection, recording and analysis. In this study, information was gathered using the following research techniques: structured and planned observation of rock blasting with explosives; and individual interviews with trainers and risk managers (typical-case sampling). In the qualitative analysis and discussion of the interview data, the following techniques were used: analyst triangulation and cognitive data treatment (complementary techniques); and juxtaposition of the information markers against three validated risk assessment methods. The results obtained point in the direction of the research hypotheses formulated, that is, the type of risk influences the selection of information, and there are significant differences in the resulting risk level when different information markers are used to estimate probability and severity.
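As a purely illustrative aside (not part of the original study), the abstract's central point, that the resulting risk level is driven by how probability and severity are estimated, can be sketched with a generic probability-times-severity risk matrix; the 1-5 scales and band thresholds below are assumptions for demonstration, not those of the validated methods cited.

```python
# Illustrative sketch of a generic probability x severity risk matrix.
# The 1-5 ordinal scales and the band thresholds are assumed for demonstration
# only; they do not reproduce any of the validated methods in the abstract.

def risk_level(probability: int, severity: int) -> tuple[int, str]:
    """Return (magnitude, band) for ordinal probability/severity scores of 1-5."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("scores must be on a 1-5 ordinal scale")
    magnitude = probability * severity
    if magnitude <= 4:
        band = "low"
    elif magnitude <= 12:
        band = "moderate"
    else:
        band = "high"
    return magnitude, band

# Two assessors using different information markers may score the same blasting
# operation differently, which shifts the resulting risk band:
print(risk_level(2, 4))  # (8, 'moderate')
print(risk_level(4, 5))  # (20, 'high')
```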
Abstract:
The Kasparov-World match was initiated by Microsoft with sponsorship from the bank First USA. The concept was that Garry Kasparov as White would play the rest of the world on the Web: one ply would be played per day and the World Team was to vote for its move. The Kasparov-World game was a success from many points of view. It certainly gave thousands the feeling of facing the world’s best player across the board and did much for the future of the game. Described by Kasparov as “phenomenal ... the most complex in chess history”, it is probably a worthy ‘Greatest Game’ candidate. Computer technology has given chess a new mode of play and taken it to new heights: the experiment deserves to be repeated. We look forward to another game and experience of this quality although it will be difficult to surpass the event we have just enjoyed. We salute and thank all those who contributed - sponsors, moderator, coaches, unofficial analysts, organisers, technologists, voters and our new friends.
Abstract:
Capillary electrophoresis (CE) offers the analyst a number of key advantages for the analysis of the components of foods. CE offers better resolution than, say, high-performance liquid chromatography (HPLC), and is more adept at the simultaneous separation of a number of components of different chemistries within a single matrix. In addition, CE requires less rigorous sample cleanup procedures than HPLC, while offering the same degree of automation. However, despite these advantages, CE remains under-utilized by food analysts. Therefore, this review consolidates and discusses the currently reported applications of CE that are relevant to the analysis of foods. Some discussion is also devoted to the development of these reported methods and to the advantages/disadvantages compared with the more usual methods for each particular analysis. It is the aim of this review to give practicing food analysts an overview of the current scope of CE.
Abstract:
Combining geological knowledge with proved plus probable ('2P') oil discovery data indicates that over 60 countries are now past their resource-limited peak of conventional oil production. The data show that the global peak of conventional oil production is close. Many analysts who rely only on proved ('1P') oil reserves data draw a very different conclusion. But proved oil reserves contain no information about the true size of discoveries, being variously under-reported, over-reported and not reported. Reliance on 1P data has led to a number of misconceptions, including the notion that past oil forecasts were incorrect, that oil reserves grow very significantly due to technology gain, and that the global supply of oil is ensured provided sufficient investment is forthcoming to 'turn resources into reserves'. These misconceptions have been widely held, including within academia, governments, some oil companies, and organisations such as the IEA. In addition to conventional oil, the world contains large quantities of non-conventional oil. Most current detailed models show that, past the conventional oil peak, non-conventional oils are unlikely to come on-stream fast enough to offset the decline in conventional production. To determine the extent of future oil supply constraints, calculations are required to establish fundamental rate limits for the production of non-conventional oils, as well as oil from gas, coal and biomass, and of oil substitution. Such assessments will need to examine technological readiness and lead-times, as well as rate constraints on investment, pollution, and net-energy return.
Abstract:
A large and complex IT project may involve multiple organizations and be constrained within a temporal period. An organization is a system comprising people, activities, processes, information, resources and goals. Understanding and modelling such a project and its interrelationship with the relevant organizations are essential for organizational project planning. This paper introduces the problem articulation method (PAM) as a semiotic method for organizational infrastructure modelling. PAM offers a suite of techniques which enables the articulation of the business, technical and organizational requirements, delivering an infrastructural framework to support the organization. It works by eliciting and formalizing organizational abstractions (e.g. processes, activities, relationships, responsibilities, communications, resources, agents, dependencies and constraints) and mapping these abstractions to represent the manifestation of the "actual" organization. Many analysts forgo organizational modelling methods and use localized, ad hoc point solutions, but these are not amenable to organizational infrastructure modelling. A case study of the infrared atmospheric sounding interferometer (IASI) will be used to demonstrate the applicability of PAM, and to examine its relevance and significance in dealing with innovation and change in organizations.
Abstract:
Context: During development, managers, analysts and designers often need to know whether enough requirements analysis work has been done and whether or not it is safe to proceed to the design stage. Objective: This paper describes a new, simple and practical method for assessing our confidence in a set of requirements. Method: We identified four confidence factors and used a goal-oriented framework with a simple ordinal scale to develop a method for assessing confidence. We illustrate the method and show how it has been applied to a real systems development project. Results: We show how assessing confidence in the requirements could have revealed problems in this project earlier and so saved both time and money. Conclusion: Our meta-level assessment of requirements provides a practical and pragmatic method that can prove useful to managers, analysts and designers who need to know when sufficient requirements analysis has been performed.
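As an illustration only, a confidence assessment of the general shape described above (several factors rated on a simple ordinal scale) could be sketched as follows; the four factor names, the scale and the aggregation rule are invented for the example and are not the factors or method defined in the paper.

```python
# Illustrative sketch of a requirements-confidence check on an ordinal scale.
# The factor names, the 0-3 scale and the "weakest link" aggregation are
# assumptions for demonstration, not the paper's published method.

ORDINAL_SCALE = {"none": 0, "low": 1, "medium": 2, "high": 3}

def overall_confidence(scores: dict[str, str]) -> str:
    """Conservative aggregation: overall confidence equals the weakest factor."""
    levels = [ORDINAL_SCALE[v] for v in scores.values()]
    weakest = min(levels)
    return next(name for name, value in ORDINAL_SCALE.items() if value == weakest)

assessment = {
    "stakeholder_coverage": "high",
    "requirement_stability": "medium",
    "traceability": "low",
    "validation_evidence": "medium",
}
print(overall_confidence(assessment))  # 'low' -> more analysis before design
```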
Abstract:
One of the most vexing issues for analysts and managers of property companies across Europe has been the existence and persistence of deviations of the Net Asset Values of property companies from their market capitalisation. The issue has clear links to similar discounts and premiums in closed-end funds. The closed-end fund puzzle is regarded as an important unsolved problem in financial economics, undermining theories of market efficiency and the Law of One Price. Consequently, it has generated a huge body of research. Although it can be tempting to focus on the particular inefficiencies of real estate markets in attempting to explain deviations from NAV, the closed-end fund discount puzzle indicates that divergences between underlying asset values and market capitalisation are not a 'pure' real estate phenomenon. When examining potential explanations, two recurring factors stand out in the closed-end fund literature as often undermining the economic rationale for a discount: the existence of premiums, and cross-sectional and periodic fluctuations in the level of the discount/premium. These need to be borne in mind when considering potential explanations for real estate markets. There are two approaches to investigating the discount to net asset value in closed-end funds: the 'rational' approach and the 'noise trader' or 'sentiment' approach. The 'rational' approach hypothesizes that the discount to net asset value is the result of company-specific factors such as management quality, tax liability and the type of stocks held by the fund. Despite the intuitive appeal of the 'rational' approach to closed-end fund discounts, these studies have not successfully explained the variance in closed-end fund discounts or why the discount to net asset value in closed-end funds varies so much over time. The variation over time in the average sector discount is a feature not only of closed-end funds but also of property companies. This paper analyses changes in the deviations from NAV for UK property companies between 2000 and 2003. The paper presents a new way to study the phenomenon, 'cleaning' out the gearing effect by introducing a new way of calculating the discount itself, which we call the "ungeared discount". It is calculated by assuming that the firm issues new equity to repurchase outstanding debt, without any variation on the asset side. In this way the discount does not depend on an accounting effect and the analysis should better explain the effect of other independent variables.
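To make the ungeared-discount idea above concrete, the sketch below works through one plausible reading of the abstract's assumption (new equity is issued at market value to retire all debt, with the asset side unchanged); it is an interpretation for illustration, not necessarily the authors' exact specification.

```python
# Illustrative sketch of a conventional vs. "ungeared" discount to NAV.
# The ungeared figure assumes new equity is issued at market value to repay
# all debt, leaving the asset side unchanged -- one plausible reading of the
# abstract, not necessarily the authors' exact formula.

def discount_to_nav(nav: float, market_cap: float) -> float:
    """Conventional discount: positive when shares trade below NAV."""
    return (nav - market_cap) / nav

def ungeared_discount(nav: float, market_cap: float, debt: float) -> float:
    """Discount after notionally retiring debt with new equity at market value."""
    ungeared_nav = nav + debt            # NAV = assets - debt, so assets = nav + debt
    ungeared_cap = market_cap + debt     # new shares worth `debt` are issued
    return (ungeared_nav - ungeared_cap) / ungeared_nav

nav, market_cap, debt = 1_000.0, 700.0, 500.0
print(f"geared discount:   {discount_to_nav(nav, market_cap):.1%}")          # 30.0%
print(f"ungeared discount: {ungeared_discount(nav, market_cap, debt):.1%}")  # 20.0%
```

Under these assumptions the ungeared discount is smaller than the conventional one for a geared company, which is the sense in which it removes a purely accounting effect from cross-company comparisons.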
Abstract:
The performance of various statistical models and commonly used financial indicators for forecasting securitised real estate returns are examined for five European countries: the UK, Belgium, the Netherlands, France and Italy. Within a VAR framework, it is demonstrated that the gilt-equity yield ratio is in most cases a better predictor of securitized returns than the term structure or the dividend yield. In particular, investors should consider in their real estate return models the predictability of the gilt-equity yield ratio in Belgium, the Netherlands and France, and the term structure of interest rates in France. Predictions obtained from the VAR and univariate time-series models are compared with the predictions of an artificial neural network model. It is found that, whilst no single model is universally superior across all series, accuracy measures and horizons considered, the neural network model is generally able to offer the most accurate predictions for 1-month horizons. For quarterly and half-yearly forecasts, the random walk with a drift is the most successful for the UK, Belgian and Dutch returns and the neural network for French and Italian returns. Although this study underscores market context and forecast horizon as parameters relevant to the choice of the forecast model, it strongly indicates that analysts should exploit the potential of neural networks and assess more fully their forecast performance against more traditional models.
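For readers unfamiliar with one of the benchmarks named above, a random-walk-with-drift point forecast can be sketched generically as below; the series and horizon are invented for illustration and are not the study's data or implementation.

```python
# Generic sketch of a random-walk-with-drift forecast, one of the benchmark
# models named in the abstract. The index values below are invented examples.

def rw_drift_forecast(series: list[float], horizon: int) -> float:
    """Forecast: last observation plus the horizon times the average change."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    drift = sum(diffs) / len(diffs)
    return series[-1] + horizon * drift

returns_index = [100.0, 101.2, 100.8, 102.1, 103.0, 102.6]
print(rw_drift_forecast(returns_index, horizon=3))  # 3-period-ahead point forecast
```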
Abstract:
This chapter examines the workings of urban microclimates and looks at the associated causes and effects of the urban heat island (UHI). It also clarifies the relationship between urban form and the key climatic parameters (sun, daylight, wind, temperature). A particular section is devoted to the concepts of UHI intensity and sky view factor (SVF); these are useful indicators for researchers in this area. The challenge of how to model urban microclimates is covered, featuring the six archetypal urban forms familiar to analysts involved in using simulation software. The latter sections address the issue of urban thermal comfort, the importance of urban ventilation and finally what mitigating strategies can be implemented to curb negative UHI effects.
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example medical scientists can use patterns extracted from historic patient data in order to determine if a new patient is likely to respond positively to a particular treatment or not; marketing analysts can use extracted patterns from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
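As a minimal sketch of the data-parallel pattern the chapter discusses (partition the data, mine each partition independently, then merge the partial results), the example below uses a generic item-frequency task and Python's standard multiprocessing module; it is a stand-in illustration, not the specific algorithms or Grid/Cloud frameworks the chapter surveys.

```python
# Minimal sketch of data-parallel mining: split the data across workers,
# compute local results, then merge them. The itemset-counting task is a
# generic stand-in for the pattern-mining algorithms discussed in the chapter.

from collections import Counter
from multiprocessing import Pool

def count_items(partition: list[list[str]]) -> Counter:
    """Local mining step: item frequency counts within one data partition."""
    counts = Counter()
    for transaction in partition:
        counts.update(transaction)
    return counts

if __name__ == "__main__":
    transactions = [["bread", "milk"], ["bread", "beer"], ["milk", "beer"],
                    ["bread", "milk", "beer"], ["milk"], ["bread"]]
    partitions = [transactions[:3], transactions[3:]]   # split across workers
    with Pool(processes=2) as pool:
        partial_counts = pool.map(count_items, partitions)
    global_counts = sum(partial_counts, Counter())      # reduce/merge step
    print(global_counts.most_common(2))
```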
Abstract:
Construction professional service (CPS) firms sell expertise and provide innovative solutions for projects founded on their knowledge, experience, and technical competences. Large CPS firms seeking to grow will often seek new opportunities in their domestic market and overseas by organic or inorganic growth through mergers, alliances, and acquisitions. Growth can also come from increasing market penetration through vertical, horizontal, and lateral diversification. Such growth, hopefully, leads to economies of scope and scale in the long term, but it can also lead to diseconomies, when the added cost of integration and the increased complexity of diversification no longer create tangible and intangible benefits. The aim of this research is to investigate the key influences impacting on the growth in scope and scale for large CPS firms. Qualitative data from the interviews were underpinned by secondary data from CPS firms’ annual reports and analysts’ findings. The findings showed five key influences on the scope and scale of a CPS firm: the importance of growth as a driver; the influence of the ownership of the firm on the decision for growth in scope and scale; the optimization of resources and capabilities; the need to serve changing clients’ needs; and the importance of localization. The research provides valuable insights into the growth strategies of international CPS firms. A major finding of the research is the influence of ownership on CPS firms’ growth strategies which has not been highlighted in previous research.
Abstract:
The “cotton issue” has been a topic of several academic discussions among trade policy analysts. However, the design of trade and agricultural policy in the EU and the USA has become a politically sensitive matter over the last five years. This study, utilizing the Agricultural Trade Policy Simulation Model (ATPSM), aims to gain insights into the global cotton market, to explain why domestic support for cotton has become an issue, to quantify the impact of the new EU agricultural policy on the cotton sector, and to measure the effect of eliminating support policies on production and trade. Results indicate that full trade liberalization would lead the four West African countries to better terms of trade with the EU. If tariff reduction follows the so-called Swiss formula, world prices would increase by 3.5%.
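For context, the Swiss formula referred to above is a standard harmonising tariff-cut rule, new_tariff = (A x old_tariff) / (A + old_tariff), where the coefficient A caps the post-cut tariff; the sketch below uses an arbitrary coefficient and tariff values for illustration, not the study's scenario parameters.

```python
# Generic sketch of the Swiss tariff-reduction formula referred to in the
# abstract. The coefficient and tariff values are arbitrary examples only.

def swiss_formula(old_tariff: float, coefficient: float) -> float:
    """Harmonising cut: higher tariffs are reduced proportionally more."""
    return (coefficient * old_tariff) / (coefficient + old_tariff)

for tariff in (10.0, 50.0, 150.0):          # ad valorem tariffs, in percent
    print(tariff, "->", round(swiss_formula(tariff, coefficient=25.0), 1))
# 10.0 -> 7.1, 50.0 -> 16.7, 150.0 -> 21.4  (all capped below A = 25)
```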
Abstract:
Dominant paradigms of causal explanation for why and how Western liberal-democracies go to war in the post-Cold War era remain versions of the 'liberal peace' or 'democratic peace' thesis. Yet such explanations have been shown to rest upon deeply problematic epistemological and methodological assumptions. Of equal importance, however, is the failure of these dominant paradigms to account for the 'neoliberal revolution' that has gripped Western liberal-democracies since the 1970s. The transition from liberalism to neoliberalism remains neglected in analyses of the contemporary Western security constellation. Arguing that neoliberalism can be understood simultaneously through the Marxian concept of ideology and the Foucauldian concept of governmentality – that is, as a complementary set of 'ways of seeing' and 'ways of being' – the thesis goes on to analyse British security in policy and practice, considering it as an instantiation of a wider neoliberal way of war. In so doing, the thesis draws upon, but also challenges and develops, established critical discourse analytic methods, incorporating within its purview not only the textual data that is usually considered by discourse analysts, but also material practices of security. This analysis finds that contemporary British security policy is predicated on a neoliberal social ontology, morphology and morality – an ideology or 'way of seeing' – focused on the notion of a globalised 'network-market', and is aimed at rendering circulations through this network-market amenable to neoliberal techniques of government. It is further argued that security practices shaped by this ideology imperfectly and unevenly achieve the realisation of neoliberal 'ways of being' – especially modes of governing self and other or the 'conduct of conduct' – and the re-articulation of subjectivities in line with neoliberal principles of individualism, risk, responsibility and flexibility. The policy and practice of contemporary British 'security' is thus recontextualised as a component of a broader 'neoliberal way of war'.