782 results for accounting model design
Abstract:
This paper gives an overview of the project Changing Coastlines: data assimilation for morphodynamic prediction and predictability. This project is investigating whether data assimilation could be used to improve coastal morphodynamic modeling. The concept of data assimilation is described, and the benefits that data assimilation could bring to coastal morphodynamic modeling are discussed. Application of data assimilation in a simple 1D morphodynamic model is presented. This shows that data assimilation can be used to improve the current state of the model bathymetry, and to tune the model parameter. We now intend to implement these ideas in a 2D morphodynamic model, for two study sites. The logistics of this are considered, including model design and implementation, and data requirement issues. We envisage that this work could provide a means for maintaining up-to-date information on coastal bathymetry, without the need for costly survey campaigns. This would be useful for a range of coastal management issues, including coastal flood forecasting.
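As a hedged illustration of the assimilation step discussed above, the sketch below applies a single optimal-interpolation (Kalman-type) analysis update to a synthetic 1D bathymetry state; the grid size, error covariances and observation positions are illustrative assumptions, not the project's actual configuration.

```python
import numpy as np

# Hedged sketch, not the project's actual scheme: one optimal-interpolation (Kalman-type)
# analysis step updating a 1D bathymetry state from sparse depth observations.
# Grid size, covariances and observation positions are illustrative assumptions.

n = 50                                     # number of bathymetry grid points
x_b = np.linspace(-5.0, -1.0, n)           # background (model) bathymetry, metres
obs_idx = np.array([10, 25, 40])           # grid points where depth was observed
y = x_b[obs_idx] + np.array([0.3, -0.2, 0.4])   # synthetic observations

H = np.zeros((obs_idx.size, n))            # observation operator: selects observed points
H[np.arange(obs_idx.size), obs_idx] = 1.0

dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 0.5 * np.exp(-dist / 5.0)              # background error covariance (spatially correlated)
R = 0.1 * np.eye(obs_idx.size)             # observation error covariance

# Analysis: x_a = x_b + K (y - H x_b), with gain K = B H^T (H B H^T + R)^{-1}
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
print(x_a[obs_idx] - x_b[obs_idx])         # analysis is pulled toward the observations
```

The same analysis equation also underlies parameter tuning when the state vector is augmented with uncertain model parameters.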
Abstract:
A dynamic, deterministic, economic simulation model was developed to estimate the costs and benefits of controlling Mycobacterium avium subsp. paratuberculosis (Johne's disease) in a suckler beef herd. The model is intended as a demonstration tool for veterinarians to use with farmers. The model design process involved user consultation and participation and the model is freely accessible on a dedicated website. The 'user-friendly' model interface allows the input of key assumptions and farm specific parameters enabling model simulations to be tailored to individual farm circumstances. The model simulates the effect of Johne's disease and various measures for its control in terms of herd prevalence and the shedding states of animals within the herd, the financial costs of the disease and of any control measures and the likely benefits of control of Johne's disease for the beef suckler herd over a 10-year period. The model thus helps to make more transparent the 'hidden costs' of Johne's in a herd and the likely benefits to be gained from controlling the disease. The control strategies considered within the model are 'no control', 'testing and culling of diagnosed animals', 'improving management measures' or a dual strategy of 'testing and culling in association with improving management measures'. An example 'run' of the model shows that the strategy 'improving management measures', which reduces infection routes during the early stages, results in a marked fall in herd prevalence and total costs. Testing and culling does little to reduce prevalence and does not reduce total costs over the 10-year period.
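To illustrate the kind of calculation such a demonstration tool performs, the sketch below runs a deliberately simplified deterministic 10-year simulation of herd prevalence and total cost under different control strategies; all rates and costs are illustrative assumptions, not the published model's parameters.

```python
# Hedged sketch with illustrative parameters only (not the published model): a deterministic
# yearly simulation of Johne's disease prevalence and total cost in a suckler herd over
# 10 years, comparing 'no control', 'improved management' and 'test and cull' strategies.

def simulate(transmission_rate, cull_fraction=0.0, years=10, herd_size=100,
             initial_prevalence=0.10, cost_per_infected=150.0, control_cost_per_year=0.0):
    prevalence = initial_prevalence
    total_cost = 0.0
    for _ in range(years):
        new_infections = transmission_rate * prevalence * (1.0 - prevalence)
        culled = cull_fraction * prevalence
        prevalence = max(0.0, min(1.0, prevalence + new_infections - culled))
        total_cost += prevalence * herd_size * cost_per_infected + control_cost_per_year
    return round(prevalence, 3), round(total_cost, 2)

print("no control:          ", simulate(transmission_rate=0.30))
print("improved management: ", simulate(transmission_rate=0.10, control_cost_per_year=500.0))
print("test and cull:       ", simulate(transmission_rate=0.30, cull_fraction=0.15,
                                        control_cost_per_year=800.0))
```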
Abstract:
Purpose – Multinationals have always needed an operating model that works – an effective plan for executing their most important activities at the right levels of their organization, whether globally, regionally or locally. The choices involved in these decisions have never been obvious, since international firms have consistently faced trade-offs between tailoring approaches for diverse local markets and leveraging their global scale. This paper seeks a more in-depth understanding of how successful firms manage the global-local trade-off in a multipolar world. Design/methodology/approach – This paper utilizes a case study approach based on in-depth senior executive interviews at several telecommunications companies including Tata Communications. The interviews probed the operating models of the companies we studied, focusing on their approaches to organization structure, management processes, management technologies (including information technology (IT)) and people/talent. Findings – Successful companies balance global-local trade-offs by taking a flexible and tailored approach toward their operating-model decisions. The paper finds that successful companies, including Tata Communications, which is profiled in depth, are breaking up the global-local conundrum into a set of more manageable strategic problems – what the authors call "pressure points" – which they identify by assessing their most important activities and capabilities and determining the global and local challenges associated with them. They then design a different operating model solution for each pressure point, and repeat this process as new strategic developments emerge. By doing so they not only enhance their agility, but they also continually calibrate that crucial balance between global efficiency and local responsiveness. Originality/value – This paper takes a unique approach to operating model design, finding that an operating model is better viewed as several distinct solutions to specific "pressure points" rather than a single and inflexible model that addresses all challenges equally. Now more than ever, developing the right operating model is at the top of multinational executives' priorities, and an area of increasing concern; the international business arena has changed drastically, requiring thoughtfulness and flexibility instead of standard formulas for operating internationally. Old adages like "think global and act local" no longer provide the universal guidance they once seemed to.
Abstract:
Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume of fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by Carreau, Cross, Ellis and Power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both Newtonian fluid and polystyrene fluid injected in different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset infers a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front that could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not available yet for these methods. Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on-the-fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
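For readers unfamiliar with the generalized-Newtonian constitutive models named above, the sketch below evaluates the shear-rate-dependent viscosities of three of them (Power-law, Carreau, Cross); the parameter values are illustrative, not fitted polystyrene data, and the Hele-Shaw pressure/fill solver itself is not reproduced here.

```python
import numpy as np

# Hedged sketch of three of the generalized-Newtonian viscosity models named above
# (Power-law, Carreau, Cross); parameters are illustrative, not fitted polystyrene data.

def power_law(shear_rate, K=1.0e4, n=0.3):
    return K * shear_rate ** (n - 1.0)

def carreau(shear_rate, eta0=1.0e4, eta_inf=0.0, lam=0.5, n=0.3):
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

def cross(shear_rate, eta0=1.0e4, eta_inf=0.0, lam=0.5, m=0.7):
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * shear_rate) ** m)

shear = np.logspace(-1, 3, 5)              # shear rates in 1/s
for model in (power_law, carreau, cross):
    print(model.__name__, np.round(model(shear), 1))   # shear-thinning viscosity curves (Pa s)
```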
Abstract:
The central objective of this research was to explore, describe and discuss how, and to what extent, companies use accounting information processed through mechanisms that take into account the effects of changes in the purchasing power of the currency (inflation) and of specific price fluctuations on corporate results and equity, in their internal control systems and, consequently, in their decision-making, performance evaluation and trend-setting processes. Given the proposed objectives, the research was carried out in two stages. The first, qualitative, stage consisted of gathering the basic data needed to meet the stated objectives through questionnaires applied at 12 (twelve) companies. From the data collected it was possible to infer that the accounting system is the primary basis of the management reports of the companies surveyed. The Integral Monetary Correction (Correção Integral) model had its conceptual validity confirmed, notwithstanding some methodological simplifications adopted and the inadequacy of the monetary unit used, mainly in 1990. Although there was consensus in pointing out the distortions caused by the indexation problem, only two companies adopted mechanisms intended to neutralize its effects. Another important finding was that, in companies using accounting systems based on the Corporate Law (Lei Societária) methodology as the primary source for management reports, management did not use those reports to support decision-making processes such as price setting. It should also be noted that limited knowledge of the conceptual framework and informational advantages of inflation-adjusted Current Cost Accounting (Contabilidade a Custo Corrente Corrigido) has prevented its broader adoption. Notably, 60% of the sampled companies already use mechanisms for valuing non-monetary assets at current values (capturing the effects of specific price fluctuations), though restricted to current items (inventories). This is certainly the path toward full application of the model, especially at the managerial level. The second stage, focused on quantitative aspects, computed economic and financial indicators from the companies' 1990 financial statements, prepared under both the Integral Monetary Correction and the Corporate Law methodologies, and sought, through comparative analysis, to assess the main distortions involved, which to some extent corroborate the comments made above. The exploratory nature of this subject, with the emphasis shifted away from quantitative analysis toward a strictly managerial focus, is what distinguishes this study from others carried out in recent times. A secondary feature is the search for information suggestive of future research directions.
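As a hedged numerical illustration of the distinction drawn above between general price-level restatement and specific current-cost valuation, the sketch below restates a non-monetary asset by a general price index and compares the result with its current replacement cost; the figures and index values are invented and do not follow the formal Correção Integral rules.

```python
# Hedged sketch with invented figures (not the formal Correção Integral methodology):
# restating a non-monetary asset for general price-level changes and comparing the result
# with its specific current (replacement) cost.

historical_cost = 100_000.0        # acquisition cost in nominal currency
index_at_purchase = 120.0          # general price index at the acquisition date
index_at_reporting = 300.0         # general price index at the reporting date
specific_current_cost = 280_000.0  # current replacement cost of the same asset

# General price-level restatement (constant purchasing power)
restated_cost = historical_cost * (index_at_reporting / index_at_purchase)

# Holding gain beyond general inflation: specific price change minus general restatement
holding_gain = specific_current_cost - restated_cost

print(f"restated cost:                       {restated_cost:,.2f}")
print(f"holding gain over general inflation: {holding_gain:,.2f}")
```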
Abstract:
Includes bibliography
Abstract:
Objectives: To investigate the effect of fluoride (0, 275 and 1250 ppm F; NaF) in combination with normal and low salivary flow rates on enamel surface loss and fluoride uptake using an erosion-remineralization-abrasion cycling model. Design: Enamel specimens were randomly assigned to 6 experimental groups (n = 8). Specimens were individually placed in custom-made devices, creating a sealed chamber on the enamel surface, connected to a peristaltic pump. Citric acid was injected into the chamber for 2 min followed by artificial saliva at 0.5 (normal flow) or 0.05 (low flow) ml/min, for 60 min. This cycle was repeated 4×/day, for 5 days. Toothbrushing with abrasive suspensions containing fluoride was performed for 2 min (15 s of actual brushing) 2×/day. Surface loss was measured by optical profilometry. KOH-soluble fluoride and enamel fluoride uptake were determined after the cycling phase. Data were analysed by two-way ANOVA. Results: No significant interactions between fluoride concentration and salivary flow were observed for any tested variable. The low flow rate caused more surface loss than the normal flow rate (p < 0.01). At both flow rates, surface loss for 0 ppm F was higher than for 275 ppm F, which did not differ from 1250 ppm F. KOH-soluble and structurally-bound enamel fluoride uptake were significantly different between fluoride concentrations, with 1250 > 275 > 0 ppm F (p < 0.01). Conclusions: Sodium fluoride reduced enamel erosion/abrasion, although no additional protection was provided by the higher concentration. Higher erosion progression was observed under the low salivary flow rate. Fluoride was not able to compensate for the differences in surface loss between flow rates.
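As a hedged illustration of the statistical design described above, the sketch below fits a two-way ANOVA (fluoride concentration × salivary flow, n = 8 per group) on synthetic surface-loss data; the data, column names and effect sizes are assumptions, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hedged sketch with synthetic data (n = 8 per group, as in the design): two-way ANOVA
# with fluoride concentration, salivary flow rate and their interaction as factors for
# enamel surface loss. Values and effect sizes are illustrative assumptions.

rng = np.random.default_rng(0)
rows = [{"fluoride": f, "flow": flow,
         "surface_loss": rng.normal(10 - 0.002 * f + (3 if flow == "low" else 0), 1)}
        for f in (0, 275, 1250) for flow in ("normal", "low") for _ in range(8)]
df = pd.DataFrame(rows)

model = smf.ols("surface_loss ~ C(fluoride) * C(flow)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction term
```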
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
CONTEXT: Failure of a colorectal anastomosis represents a life-threatening complication of colorectal surgery. Splenic flexure mobilization may contribute to reducing the occurrence of anastomotic complications due to technical flaws. There are no published reports measuring the impact of splenic flexure mobilization on the length of mobilized colon available to construct a safe colorectal anastomosis. OBJECTIVE: The aim of the present study was to determine the effect of two techniques for splenic flexure mobilization on colon lengthening during open left-sided colon surgery using a cadaver model. DESIGN: Anatomical dissections for left colectomy and colorectal anastomosis at the sacral promontory level were conducted in 20 fresh cadavers by the same team of four surgeons. The effect of partial and full splenic flexure mobilization on the extent of the mobilized left colon segment was determined. SETTING: University of Sao Paulo Medical School, Sao Paulo, SP, Brazil. Tertiary medical institution and university hospital. PARTICIPANTS: A team of four surgeons operated on 20 fresh cadavers. RESULTS: The length of resected left colon enabling a tension-free colorectal anastomosis at the level of the sacral promontory without mobilizing the splenic flexure was 46.3 (35-81) cm. After partial mobilization of the splenic flexure, an additionally mobilized colon segment measuring 10.7 (2-30) cm was obtained. After full mobilization of the distal transverse colon, a mean segment of 28.3 (10-65) cm was achieved. CONCLUSION: Splenic flexure mobilization techniques are associated with effective left colon lengthening for colorectal anastomosis. This result may contribute to decision-making during rectal surgery and low colorectal and coloanal anastomosis.
Abstract:
Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)
Abstract:
One important metaphor drawn from biological theories and used to investigate organizational and business strategy issues is that of heredity; an area requiring further investigation is the extent to which the characteristics of blueprints inherited from the parent organization help explain the subsequent development of spawned ventures. In order to shed light on the tension between inherited patterns and the new trajectory that may characterize spawned ventures' development, we propose a model aimed at investigating which blueprint elements might exert an effect on business model design choices and to what extent their persistence (or abandonment) determines subsequent business model innovation. Under the assumption that academic and corporate institutions transmit different genes to their spin-offs, we expect heterogeneity in the elements that affect business model design choices and their subsequent evolution. For this reason we carry out a twofold analysis in the biotech (meta)industry: under a multiple-case research design, the business model, and especially the fundamental design elements and themes that scholars have identified to decompose the construct, is analysed thoroughly. Our purpose is to isolate the dimensions of the business model that may have been the object of legacy and those along which an experimentation and learning process is more likely to happen, bearing in mind that differences between academic and corporate spin-offs may not be as evident as expected, especially considering that business model innovation may occur.
Abstract:
The intention of a loan loss provision is the anticipation of the loan's expected losses by adjusting the book value of the loan. Furthermore, this loan loss provision has to be compared to the expected loss according to Basel II and, in the case of a difference, liable equity has to be adjusted. This, however, assumes that the loan loss provision and the expected loss are based on a similar economic rationale, which holds only conditionally under current IFRS loan loss provisioning methods. Therefore, differences between loan loss provisions and expected losses should result only from different approaches to parameter estimation within each model, not from different assumptions about the model's outcome. The provisioning and accounting model developed in this paper overcomes the aforementioned shortcomings and is consistent with an economic rationale of expected losses. Additionally, the model is based on a close-to-market valuation of the loan, which is in line with the basic idea of IFRS. Suggestions for changes in current accounting and capital requirement rules are provided.
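As a hedged illustration of the comparison described above, the sketch below computes an expected loss with the standard Basel II decomposition EL = PD × LGD × EAD and sets it against an accounting provision; the figures are invented, and the capital treatment of any shortfall or excess is simplified relative to the actual rules.

```python
# Hedged sketch with invented figures: the expected loss computed with the standard
# Basel II decomposition EL = PD * LGD * EAD, compared against the accounting loan loss
# provision; the capital treatment of the shortfall/excess is simplified here.

pd_rate = 0.02          # probability of default over one year
lgd = 0.45              # loss given default
ead = 1_000_000.0       # exposure at default

expected_loss = pd_rate * lgd * ead      # Basel II expected loss
loan_loss_provision = 7_000.0            # provision recognised in the accounts

shortfall = max(0.0, expected_loss - loan_loss_provision)   # reduces liable equity
excess = max(0.0, loan_loss_provision - expected_loss)

print(f"expected loss:       {expected_loss:,.2f}")
print(f"provision shortfall: {shortfall:,.2f}")
print(f"provision excess:    {excess:,.2f}")
```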
Abstract:
Objective: To determine how a clinician's background knowledge, their tasks, and displays of information interact to affect the clinician's mental model. Design: Repeated-measures nested experimental design. Population, Sample, Setting: Populations were gastrointestinal/internal medicine physicians and nurses within the greater Houston area. A purposeful sample of 24 physicians and 24 nurses was studied in 2003. Methods: Subjects were randomized to two different displays of two different mock medical records: one that contained highlighted patient information and one that contained non-highlighted patient information. They were asked to read and summarize their understanding of the patients aloud. Propositional analysis was used to assess their comprehension of the patients. Findings: Different mental models were found between physicians and nurses given the same display of information. The information their models shared was minor compared with the variance between them. There was also more variance within the nursing mental models than within the physician mental models given different displays of the same information. Statistically, there was no interaction effect between the display of information and clinician type. Only clinician type could account for the differences in clinician comprehension and thus their mental models of the cases. Conclusion: The factors that may explain the variance within and between the clinician models are clinician type and, only in the nursing group, the use of highlighting.
Abstract:
Objective. To evaluate the HEADS UP Virtual Molecular Biology Lab, a computer-based simulated laboratory designed to teach advanced high school biology students how to create a mouse model. Design. A randomized clinical control design of forty-four students from two science magnet high schools in Mercedes, Texas was utilized to assess knowledge and skills of molecular laboratory procedures, attitudes towards science and computers as a learning tool, and usability of the program. Measurements. Data was collected using five paper-and-pencil formatted questionnaires and an internal "lab notebook." Results. The Virtual Lab was found to significantly increase student knowledge over time (p<0.005) and with each use (p<0.001) as well as positively increase attitudes towards computers (p<0.001) and skills (p<0.005). No significant differences were seen in science attitude scores. Conclusion. These results provide evidence that the HEADS UP Virtual Molecular Biology Lab is a potentially effective educational tool for high school molecular biology education.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
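As a hedged illustration of using a time-series analysis result as a latent candidate feature, the sketch below derives a trend slope from a synthetic vital-sign window and places it alongside the traditional single point-in-time value; the variable, window length and choice of analysis are assumptions, not the study's design.

```python
import numpy as np

# Hedged sketch with synthetic data (not the study's variables): a time-series analysis
# result, here the slope of a least-squares linear fit over a 60-minute window, is used
# as a latent candidate feature alongside the traditional single point-in-time value.

rng = np.random.default_rng(0)
minutes = np.arange(0, 60, 5)                                    # observation times in the window
heart_rate = 120.0 - 0.4 * minutes + rng.normal(0.0, 2.0, minutes.size)  # synthetic series

point_in_time_value = heart_rate[-1]                             # traditional multivariate feature
trend_slope = np.polyfit(minutes, heart_rate, 1)[0]              # time-series latent feature

candidate_features = {
    "heart_rate_latest": round(float(point_in_time_value), 1),   # raw, single-value data class
    "heart_rate_trend": round(float(trend_slope), 3),            # time-series analysis result
}
print(candidate_features)
```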
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU" provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are unfeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled: "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (ie, without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
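As a hedged illustration of three of the preprocessing issues named above (anchoring to a reference time, imputation onto a fixed duration/resolution grid, and normalization by variable family), the sketch below processes a synthetic, irregularly sampled series; all values and family statistics are assumptions, not the study's data.

```python
import numpy as np

# Hedged sketch with synthetic data and simplified choices, illustrating three preprocessing
# issues named above: anchoring observations to a reference time, imputing onto the fixed
# duration/resolution grid chosen at design time, and normalizing by variable family before
# extracting the trend-analysis feature.

reference_time = 1000.0                                     # e.g. minutes; anchor for the window
raw_times = np.array([944.0, 951.0, 963.0, 979.0, 996.0])   # irregular observation times
raw_values = np.array([98.0, 97.0, 95.0, 93.0, 90.0])       # e.g. pulse oximetry (%)

# 60-minute window before the reference time at 5-minute resolution (design-phase choice)
grid = np.arange(reference_time - 60.0, reference_time + 1.0, 5.0)
imputed = np.interp(grid, raw_times, raw_values)            # linear interpolation as simple imputation

# Normalize by variable family (all pulse-oximetry channels share one scale; values assumed)
family_mean, family_std = 97.0, 2.0
normalized = (imputed - family_mean) / family_std

trend_feature = np.polyfit(grid - reference_time, normalized, 1)[0]  # slope fed to the classifier
print(round(float(trend_feature), 4))
```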