954 results for Process capability analysis


Relevance:

30.00%

Publisher:

Abstract:

The process of free reserves in a non-life insurance portfolio, as defined in the classical model of risk theory, is modified by the introduction of dividend policies that set maximum levels for the accumulation of reserves. The first part of the work formulates the quantification of dividend payments via the expectation of their present value under different hypotheses. The second part presents a solution based on a system of linear equations for discrete dividend payments in the case of a constant dividend barrier, illustrated by solving a specific case.
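As a minimal illustration of the kind of linear system such a solution involves, the sketch below assumes a simple discrete-time surplus process (a gain of 1 per period with probability p, a loss of 1 otherwise), a constant dividend barrier b, ruin at surplus -1 and a one-period discount factor v; these modelling choices and parameter values are assumptions for illustration, not the specification used in the paper.

```python
import numpy as np

def expected_discounted_dividends(p=0.6, v=0.95, b=5):
    """Solve the linear system for V(u), u = 0..b, where V(u) is the expected
    present value of dividends paid before ruin, starting from surplus u,
    under a constant dividend barrier at b (illustrative model only)."""
    q = 1.0 - p
    n = b + 1
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for u in range(n):
        A[u, u] = 1.0
        if u < b:
            # V(u) = v * [p V(u+1) + q V(u-1)], with V(-1) = 0 (ruin)
            A[u, u + 1] -= v * p
            if u - 1 >= 0:
                A[u, u - 1] -= v * q
        else:
            # At the barrier: an up-step pays a unit dividend and surplus stays at b
            # V(b) = v * [p (1 + V(b)) + q V(b-1)]
            A[b, b] -= v * p
            A[b, b - 1] -= v * q
            rhs[b] = v * p
    return np.linalg.solve(A, rhs)

print(expected_discounted_dividends())  # V(0), ..., V(b)
```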

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT In S. cerevisiae, the protein phosphatase Cdc14pwt is essential for mitotic exit through its contribution to reducing mitotic CDK activity. But Cdc14pwt also acts as a more general temporal coordinator of mid and late mitotic events by controlling the partitioning of DNA, microtubule stability and cytokinesis. Cdc14pwt orthologs are well conserved from yeasts to humans, and sequence comparison revealed the presence of three domains, A, B and C, of which A and B form the catalytic domain. Cdc14pwt orthologs are regulated (in part) through cell cycle-dependent changes in their localization. Some of them are thought to be kept inactive by sequestration in the nucleolus during interphase. This is the case for flp1pwt, the single identified Cdc14pwt ortholog in the fission yeast S. pombe. In early mitosis, flp1pwt leaves the nucleolus and localizes to the kinetochores, the contractile ring and the mitotic spindle, suggesting that it has multiple substrates and regulates many mitotic processes. flp1Δ cells show a high chromosome loss rate and septation defects, suggesting a role for flp1pwt in the fidelity of chromosome transmission and in cytokinesis. The aim of this study is to characterize the mechanisms underlying flp1pwt functions and the control of its activity. A structure-function analysis revealed that the presence of both the A and B domains is required for biological function and for proper flp1pwt mitotic localization. In contrast, the C domain of flp1pwt is responsible for its proper nucleolar localization in G2/interphase. My data suggest that dephosphorylation of substrates by flp1pwt is not necessary for any of its changes in localization except that at the medial ring. In that particular case, the catalytic activity of flp1pwt is required for efficient localization, thereby revealing an additional level of regulation. All the functions of flp1pwt assayed to date require its catalytic activity, emphasizing the importance of further identification of its substrates. As described for other orthologs, the capability for self-interaction and the phosphorylation status might help to control flp1pwt activity. My data suggest that flp1pwt forms oligomers in vivo and that phosphorylation is not essential for the localization changes of the protein. In addition, the hypophosphorylated form of flp1pwt might be specifically involved in the promotion of cytokinesis. The results of this study suggest that multiple modes of regulation, including localization, self-association and phosphorylation, allow a fine-tuning of flp1pwt phosphatase activity, and more generally that of the Cdc14pwt family of phosphatases.

Relevance:

30.00%

Publisher:

Abstract:

This paper analyzes the relationship between the spatial density of economic activity and interregional differences in industrial labour productivity in Spain during the period 1860-1999. In the spirit of Ciccone and Hall (1996) and Ciccone (2002), we analyze the evolution of this relationship over the long term. Using data for the period 1860-1999, we show the existence of an agglomeration effect linking the density of economic activity with labour productivity in industry. This effect has been present since the beginning of the industrialization process in the middle of the 19th century but has been weakening over time. The estimated elasticity of labour productivity with respect to employment density was close to 8% in the subperiod 1860-1900, falls to around 7% in the subperiod 1914-1930 and to 4% in the subperiod 1965-1979, and becomes insignificant in the final subperiod 1985-1999. At the end of the period analyzed there is no evidence of net agglomeration effects in industry. This result could be explained by a substantial increase in congestion effects in large industrial metropolitan areas that would have offset the centripetal or agglomeration forces at work. Furthermore, this result is also consistent with the evidence of a dispersion of industrial activity in Spain during the last decades.
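For readers who want to see the estimation idea in miniature, the sketch below runs the log-log regression behind such an elasticity estimate on synthetic provincial data (not the paper's dataset); the 6% elasticity used to generate the data is an arbitrary choice.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic provincial data (illustrative only, not the paper's dataset):
# labour productivity is generated with a 6% elasticity w.r.t. employment density.
rng = np.random.default_rng(0)
n = 50                                    # number of provinces
log_density = rng.normal(3.0, 1.0, n)     # log employment density
log_productivity = 1.5 + 0.06 * log_density + rng.normal(0.0, 0.05, n)

# OLS of log productivity on log density; the slope is the agglomeration elasticity.
X = sm.add_constant(log_density)
fit = sm.OLS(log_productivity, X).fit()
print(fit.params[1], fit.bse[1])          # estimated elasticity and its standard error
```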

Relevance:

30.00%

Publisher:

Abstract:

Production flow analysis (PFA) is a well-established methodology for transforming a traditional functional layout into a product-oriented layout. The method uses part routings to find natural clusters of workstations, forming production cells able to complete parts and components swiftly with a simplified material flow. Once implemented, the scheduling system is based on period batch control, aiming to establish fixed planning, production and delivery cycles for the whole production unit. PFA is traditionally applied to job shops with functional layouts; after reorganization into groups, lead times fall, quality improves and personnel motivation rises. Several papers have documented this, yet no research has studied its application to service operations management. This paper aims to show, with real cases, that PFA can be applied not only to job-shop and assembly operations but also to back-office and service processes. The cases clearly show that PFA reduces non-value-adding operations, introduces flow by evening out bottlenecks and diminishes process variability, all of which contribute to efficient operations management.
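A rough sketch of the clustering step at the heart of PFA is given below: rank order clustering applied to a small, made-up machine-part incidence matrix derived from part routings. The matrix, the machine/part labels and the use of King's algorithm are illustrative assumptions; the paper itself does not prescribe this particular variant.

```python
import numpy as np

def rank_order_clustering(incidence, max_iter=20):
    """King's rank order clustering: repeatedly sort rows and columns of a binary
    machine-part incidence matrix by their binary weights; when the ordering is
    stable, candidate cells appear as (approximate) diagonal blocks."""
    M = incidence.copy()
    rows, cols = np.arange(M.shape[0]), np.arange(M.shape[1])
    for _ in range(max_iter):
        r = np.argsort(-(M @ (2 ** np.arange(M.shape[1])[::-1])), kind="stable")
        M, rows = M[r], rows[r]
        c = np.argsort(-((2 ** np.arange(M.shape[0])[::-1]) @ M), kind="stable")
        M, cols = M[:, c], cols[c]
        if (r == np.arange(len(r))).all() and (c == np.arange(len(c))).all():
            break                                   # ordering is stable
    return M, rows, cols

# Hypothetical routings: rows = machines, columns = parts (1 = the part visits the machine).
A = np.array([[1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0],
              [1, 0, 1, 0, 1],
              [0, 1, 0, 1, 0]])
clustered, machine_order, part_order = rank_order_clustering(A)
print(clustered)
print("machine cells:", machine_order, "part families:", part_order)
```

Block-diagonal structure in the sorted matrix indicates candidate cells; parts and machines that break the blocks are the exceptional elements a PFA study would examine by hand.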

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a process for mining research and development abstract databases to profile the current status of, and to project potential developments in, target technologies. The process is called "technology opportunities analysis." The article steps through the process using a sample data set of abstracts from the INSPEC database on the topic of "knowledge discovery and data mining." The paper offers a set of specific indicators suitable for mining such databases to understand innovation prospects. In illustrating the uses of such indicators, it offers some insights into the status of knowledge discovery research.
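A minimal sketch of the indicator-computation step is given below: counting records per year and the most frequent keywords over a list of abstract records. The record fields and values are assumptions, not the INSPEC schema or the paper's sample.

```python
from collections import Counter

# Hypothetical abstract records; the field names are assumptions, not INSPEC's schema.
records = [
    {"year": 1994, "keywords": ["data mining", "databases"]},
    {"year": 1995, "keywords": ["knowledge discovery", "data mining"]},
    {"year": 1995, "keywords": ["neural networks", "data mining"]},
]

# Indicator 1: publication activity over time (records per year).
per_year = Counter(r["year"] for r in records)

# Indicator 2: topical emphasis (most frequent keywords across records).
top_keywords = Counter(k for r in records for k in r["keywords"]).most_common(5)

print(sorted(per_year.items()))
print(top_keywords)
```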

Relevance:

30.00%

Publisher:

Abstract:

Didactic knowledge about content is constructed through an idiosyncratic synthesis of knowledge about the subject area, general pedagogical knowledge about students and the teacher's biography. This study aimed to understand the construction process and the sources of Pedagogical Content Knowledge, as well as to analyze its manifestations and variations in the interactive teaching of teachers whom students considered competent. Data were collected from teachers of an undergraduate nursing program in the South of Brazil through non-participant observation and semi-structured interviews, and were analyzed using the constant comparison method. The results disclose the need for initial education to cover pedagogical aspects for nurses; to treat permanent education as fundamental in view of the complexity of contents and of teaching; and to use mentoring/monitoring and to value learning with experienced teachers, with a view to developing quality teaching.

Relevance:

30.00%

Publisher:

Abstract:

Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
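As a hedged sketch of the kind of data-driven spatial prediction mentioned above, the code below implements a simple kernel-regression neural network estimate (a Nadaraya-Watson / general-regression-type model), which predicts the value at a query location as a distance-weighted average of all measurements. The coordinates, measurements and bandwidth are synthetic placeholders, not Chernobyl fallout data.

```python
import numpy as np

def kernel_regression_predict(xy_train, z_train, xy_query, sigma=1.0):
    """Nadaraya-Watson kernel regression (the core of a general regression neural
    network): each prediction is a Gaussian distance-weighted average of all
    training measurements, with bandwidth sigma."""
    d2 = ((xy_query[:, None, :] - xy_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ z_train) / w.sum(axis=1)

# Illustrative example: a handful of synthetic measurements on a 2D plane.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 10.0, size=(100, 2))          # sampling locations (km)
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=100)    # measured activity (arbitrary units)
grid = np.array([[2.0, 5.0], [7.5, 3.0]])            # prediction locations
print(kernel_regression_predict(xy, z, grid, sigma=0.8))
```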

Relevance:

30.00%

Publisher:

Abstract:

The prediction of rockfall travel distance below a rock cliff is an indispensable activity in rockfall susceptibility, hazard and risk assessment. Although the size of the detached rock mass may differ considerably at each specific rock cliff, small rockfalls (<100 m³) are the most frequent process. Empirical models may provide suitable information for predicting the travel distance of small rockfalls over an extensive area at a medium scale (1:100 000 to 1:25 000). "Solà d'Andorra la Vella" is a rocky slope located close to the town of Andorra la Vella, where the government has been documenting rockfalls since 1999. This documentation consists of mapping the release point and the individual fallen blocks immediately after each event. The documentation of historical rockfalls through morphological analysis, eyewitness accounts and historical images serves to increase the available information. In total, data from twenty small rockfalls have been gathered, comprising about one hundred individual fallen rock blocks. The data acquired have been used to check the reliability of the most widely adopted empirical models (the reach and shadow angle models) and to analyse the influence of the parameters affecting travel distance (rockfall size, height of fall along the rock cliff and volume of the individual fallen rock block). For predicting travel distances on maps at medium scales, a method has been proposed based on the "reach probability" concept. The accuracy of the results has been tested against the line connecting the farthest fallen boulders, which represents the maximum travel distance of past rockfalls. The paper concludes with a discussion of the application of both empirical models to other study areas.
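The reach- and shadow-angle models referred to above reduce, in their simplest form, to a trigonometric bound: the farthest blocks lie on a line dipping at a characteristic minimum angle from the source (reach angle) or the cliff base (shadow angle), so the maximum horizontal travel distance is L = H / tan(angle). The sketch below uses placeholder values, not angles calibrated for Solà d'Andorra la Vella.

```python
import math

def max_travel_distance(fall_height_m, min_angle_deg):
    """Reach/shadow-angle model: the line to the farthest block dips at a
    characteristic minimum angle, so L_max = H / tan(angle)."""
    return fall_height_m / math.tan(math.radians(min_angle_deg))

# Placeholder values, not calibrated data from the Solà d'Andorra study:
H = 120.0            # height of fall (m)
reach_angle = 32.0   # minimum reach angle observed (degrees)
print(f"maximum horizontal travel distance ~ {max_travel_distance(H, reach_angle):.0f} m")
```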

Relevance:

30.00%

Publisher:

Abstract:

The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase came along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partitioning was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method complexity approach was adopted and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformations and simulations thus allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), by contrast, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is best under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient indoor radon decision making.
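The cross-validation filter idea (the CVMF mentioned above) can be sketched as a leave-one-out k-nearest-neighbour check: each measurement is predicted from its neighbours and flagged when the residual is unusually large. The neighbour count, threshold and data below are assumptions for illustration, not the settings or data used in the thesis.

```python
import numpy as np

def cross_validation_filter(xy, z, k=5, n_sigma=3.0):
    """Leave-one-out KNN filter in the spirit of a cross-validation noise filter:
    each point is predicted from its k nearest neighbours (itself excluded) and
    flagged when the residual exceeds n_sigma residual standard deviations."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude each point from its own estimate
    nn = np.argsort(d, axis=1)[:, :k]            # indices of the k nearest neighbours
    z_hat = z[nn].mean(axis=1)                   # leave-one-out KNN estimate
    resid = z - z_hat
    keep = np.abs(resid) <= n_sigma * resid.std()
    return keep, z_hat

# Illustrative data (not the Swiss radon dataset): a few spatial measurements.
rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(200, 2))
z = 50 + 0.3 * xy[:, 0] + rng.normal(0, 2, 200)
z[0] = 500                                        # one artificial outlier
keep, _ = cross_validation_filter(xy, z)
print("flagged points:", np.where(~keep)[0])
```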

Relevance:

30.00%

Publisher:

Abstract:

Researchers should continuously ask how to improve the models we rely on to make financial decisions about the planning, design, construction, and maintenance of roadways. This project presents an alternative tool that will supplement local decision making while maintaining a full appreciation of the complexity and sophistication of today's regional models and local traffic impact study methodologies. This alternative method is tailored to the desires of local agencies, which requested a better, faster, and easier way to evaluate land uses and their impact on future traffic demands at the sub-area or project corridor level. A particular emphasis was placed on scenario planning for currently undeveloped areas. The scenario planning tool was developed using actual land use and roadway information for the communities of Johnston and West Des Moines, Iowa. Both communities used the output from this process to make regular decisions regarding infrastructure investment, design, and land use planning. The City of Johnston case study included forecasting future traffic for the western portion of the city within a 2,600-acre area covering 42 intersections. The City of West Des Moines case study included forecasting future traffic for the city's western growth area covering over 30,000 acres and 331 intersections. Both studies included forecasting a.m. and p.m. peak-hour traffic volumes based upon a variety of land use scenarios. The tool developed took geographic information system (GIS)-based parcel and roadway information, converted the data into a graphical spreadsheet tool, allowed the user to conduct trip generation, distribution, and assignment, and then automatically converted the data into a Synchro roadway network, which allows for capacity analysis and visualization. The operational delay outputs were converted back into a GIS thematic format for contrast and further scenario planning. This project has laid the groundwork for improving both planning and civil transportation decision making at the sub-regional, super-project level.
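The trip-generation step of such a tool can be sketched as a simple rate-times-units calculation over GIS parcels, as below; the land-use categories and trip rates are placeholders, not ITE rates or the values used in the Johnston and West Des Moines studies.

```python
# Hypothetical peak-hour trip rates per unit of development; placeholders only,
# not ITE rates or the rates used in the Johnston / West Des Moines studies.
TRIP_RATES = {
    "single_family": 1.0,    # trips per dwelling unit
    "retail": 3.5,           # trips per 1,000 sq ft
    "office": 1.5,           # trips per 1,000 sq ft
}

def trip_generation(parcels):
    """Sum peak-hour trips over GIS parcels, each described by a land use and a size."""
    return sum(TRIP_RATES[p["land_use"]] * p["units"] for p in parcels)

parcels = [
    {"land_use": "single_family", "units": 400},   # 400 dwelling units
    {"land_use": "retail", "units": 120},          # 120,000 sq ft
    {"land_use": "office", "units": 250},          # 250,000 sq ft
]
print(f"estimated peak-hour trips: {trip_generation(parcels):.0f}")
```

The distribution and assignment steps would then spread these trips across the roadway network before export to a capacity-analysis package such as Synchro.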

Relevance:

30.00%

Publisher:

Abstract:

In line with educational issues involved in emergent literacy practices in preschool, in particular those concerning comprehension processes, this paper focuses on picture-based narrative comprehension during an interactive reading session of a wordless picture book, involving a group of three-year-old children and their teacher. The children were asked to make inferences about the meaning and outcome of the story, a procedure which gradually elicited their responses on how events link together, thus enhancing their capacity to use prior and implicit knowledge to build the story's meaning. Moreover, this study highlights the importance of interaction for developing comprehension. The data collected were analysed following didactic microgenesis, an analytical approach showing that knowledge built during interaction depends on the joint construction of a zone of common meaning through which teacher and children try to adjust to each other. In order to help merge the different meanings of the story built during the session, a text written by the researchers, following the narrative structure of the story, was read by the teacher after the picture-based reading. This led us to examine, through interactional analysis, which semiotic cues were used during recall on the following day, as an additional measure of knowledge construction.

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT: A firm's competitive advantage can arise from internal resources as well as from an interfirm network. This dissertation investigates the competitive advantage of a firm involved in an innovation network by integrating strategic management theory and social network theory. It develops theory and provides empirical evidence illustrating how a networked firm enables network value and appropriates this value in an optimal way according to its strategic purpose. The four inter-related essays in this dissertation provide a framework that sheds light on the extraction of value from an innovation network by managing and designing the network in a proactive manner. The first essay reviews research in social network theory and knowledge transfer management, and identifies the crucial factors of innovation network configuration for a firm's learning performance or innovation output. The findings suggest that network structure, network relationships, and network position all have an impact on a firm's performance. Although the previous literature indicates that there are disagreements about the impact of dense versus sparse structures, as well as strong versus weak ties, case evidence from Chinese software companies reveals that dense and strong connections with partners are positively associated with firms' performance. The second essay is a theoretical essay that illustrates the limitations of social network theory for explaining the source of network value and offers a new theoretical model that applies the resource-based view to network environments. It suggests that network configurations, such as network structure, network relationship and network position, can be considered important network resources. In addition, this essay introduces the concept of network capability, and suggests that four types of network capabilities play an important role in unlocking the potential value of network resources and determining the distribution of network rents between partners. This essay also highlights the contingent effects of network capability on a firm's innovation output, and explains how the different impacts of network capability depend on a firm's strategic choices. This new theoretical model has been pre-tested with a case study of the Chinese software industry, which enhances the internal validity of the theory. The third essay addresses the questions of what impact network capability has on firm innovation performance and what the antecedent factors of network capability are. This essay employs a structural equation modelling methodology on a sample of 211 Chinese high-tech firms. It develops a measurement of network capability and reveals that networked firms deal with cooperation and coordination with partners on different levels according to their levels of network capability. The empirical results also suggest that IT maturity, openness of culture, the management system involved, and experience with network activities are antecedents of network capability. Furthermore, a two-group analysis of the role of international partner(s) shows that when there is a gap in culture and norms with foreign partners, a firm must mobilize more resources and effort to improve its performance with respect to its innovation network. The fourth essay addresses the way in which network capabilities influence firm innovation performance.
Using hierarchical multiple regression with data from Chinese high-tech firms, the findings suggest that knowledge transfer partially mediates the relationship between network capabilities and innovation performance. The findings also reveal that the impact of network capabilities varies with the environment and with the strategic decision the firm has made: exploration or exploitation. Network constructing capability has a greater positive impact on, and contributes more to, innovation performance than network operating capability in an exploration network, whereas network operating capability is more important than network constructing capability for innovative firms in an exploitation network. Therefore, these findings highlight that a firm can proactively shape its innovation network for greater benefit, but when it does so it should adjust its focus and efforts in accordance with its innovation purposes or strategic orientation.
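A hedged sketch of the hierarchical-regression test of partial mediation reported in the fourth essay is shown below: innovation performance is regressed on network capability with and without the mediator (knowledge transfer), and a reduced but still non-zero capability coefficient in the second model is the usual signature of partial mediation. The data are synthetic stand-ins, not the dissertation's survey data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data standing in for the dissertation's survey variables.
rng = np.random.default_rng(3)
n = 211
network_capability = rng.normal(size=n)
knowledge_transfer = 0.6 * network_capability + rng.normal(scale=0.8, size=n)   # mediator
innovation = 0.3 * network_capability + 0.5 * knowledge_transfer + rng.normal(scale=0.8, size=n)

# Step 1: innovation performance on network capability alone.
m1 = sm.OLS(innovation, sm.add_constant(network_capability)).fit()

# Step 2: add the mediator; a reduced (but still significant) capability coefficient
# is the classic signature of partial mediation.
X2 = sm.add_constant(np.column_stack([network_capability, knowledge_transfer]))
m2 = sm.OLS(innovation, X2).fit()

print("capability effect without mediator:", round(m1.params[1], 3))
print("capability effect with mediator:   ", round(m2.params[1], 3))
```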

Relevance:

30.00%

Publisher:

Abstract:

Chemokines are small chemotactic molecules widely expressed throughout the central nervous system. A number of papers over the past few years have suggested that they have physiological functions in addition to their roles in neuroinflammatory diseases. In this context, the best evidence concerns the CXC-chemokine stromal cell-derived factor (SDF-1alpha or CXCL12) and its receptor CXCR4, whose signalling cascade is also implicated in the glutamate release process from astrocytes. Recently, astrocytic synaptic-like microvesicles (SLMVs) that express vesicular glutamate transporters (VGLUTs) and are able to release glutamate by Ca(2+)-dependent regulated exocytosis have been described both in tissue and in cultured astrocytes. Here, in order to elucidate whether the SDF-1alpha/CXCR4 system can participate in the brain's fast communication systems, we investigated whether activation of the CXCR4 receptor triggers glutamate exocytosis in astrocytes. Using total internal reflection fluorescence (TIRF) microscopy and the membrane-fluorescent styryl dye FM4-64, we adapted an imaging methodology recently developed to measure exocytosis and recycling in synaptic terminals, and monitored the CXCR4-mediated exocytosis of SLMVs in astrocytes. We analyzed the co-localization of VGLUT with the FM dye at the single-vesicle level, and observed the kinetics of FM dye release during single fusion events. We found that activation of CXCR4 receptors triggered a burst of exocytosis on a millisecond time scale that involved the release of Ca(2+) from internal stores. These results support the idea that astrocytes can respond to external stimuli and communicate with neighboring cells via the fast release of glutamate.

Relevance:

30.00%

Publisher:

Abstract:

Observations of flood effectiveness imply that two families of processes describe the formation of debris flow volume: one related to the rainfall–erosion relationship, which can be seen as a gradual process, and one related to additional geological/geotechnical events, hereafter named extraordinary events. In order to discuss the hypothesis of the coexistence of two modes of volume formation, several methodologies are applied. Firstly, classical approaches consisting of relating volume to catchment characteristics are considered. These approaches raise questions about the quality of the data rather than providing answers concerning the controlling processes. Secondly, we consider statistical approaches (the distribution of the cumulative number of events and cluster analysis), and these suggest the possibility of two distinct families of processes. However, the quantitative evaluation of the threshold differs from the one that could be obtained from the first approach, although both agree on the coexistence of two families of events. Thirdly, a conceptual model is built to explore how and why debris flow volume in alpine catchments changes with time. Depending on the initial condition (sediment production), the model shows that large debris flows (i.e. with important volumes) are observed in the initial period, before a steady state is reached; during this second period, debris flow volumes such as those observed in the initial period are not observed again. Integrating the results of the three approaches, two case studies are presented showing: (1) the possibility of observing in a catchment large volumes that will never occur again owing to a drastic decrease in sediment availability, supporting their difference from gradual erosion processes; (2) that following a rejuvenation of the sediment storage (by a rock avalanche), the magnitude–frequency relationship of a torrent can be differentiated into two phases, an initial one with large and frequent debris flows and a later one with less intense and less frequent debris flows, supporting the results of the conceptual model. Although the results obtained cannot identify a clear threshold between the two families of processes, they show that some debris flows can be seen as pulses of sediment differing from those expected from gradual erosion.
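The conceptual model described above can be sketched as a simple sediment-storage budget: storage is recharged gradually between events and each debris flow mobilizes a fraction of the current stock, so volumes decline from the inherited stock toward a steady state set by the recharge rate. The parameters below are illustrative, not values calibrated for any catchment.

```python
# Conceptual sediment-storage model (illustrative parameters only): the channel
# stores sediment recharged at a constant rate, and each triggering event
# mobilizes a fixed fraction of whatever is currently stored.
def simulate_debris_flow_volumes(initial_storage=50_000.0,    # m3, inherited sediment stock
                                 recharge_per_event=1_000.0,  # m3 produced between events
                                 mobilized_fraction=0.3,
                                 n_events=20):
    storage = initial_storage
    volumes = []
    for _ in range(n_events):
        storage += recharge_per_event            # gradual production between triggers
        volume = mobilized_fraction * storage    # event flushes part of the stock
        storage -= volume
        volumes.append(volume)
    return volumes

for i, v in enumerate(simulate_debris_flow_volumes(), 1):
    print(f"event {i:2d}: {v:8.0f} m3")
```

In this toy run the first events are an order of magnitude larger than the later ones, which converge toward the recharge rate, mirroring the two-phase behaviour the conceptual model predicts.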

Relevance:

30.00%

Publisher:

Abstract:

This supplementary project has been undertaken as an effort to continue work previously completed in the Pooled Fund Study of Premature Concrete Pavement Deterioration. As such, it shares the objective of "identifying the variables that are present in those pavements exhibiting premature deterioration," by collecting additional data and performing statistical analysis of those data. The approach and philosophy of this work are identical to those followed in the above project, and the Pooled Fund Study Final Report provides a detailed description of this process. This project has involved the collection of data for additional sites in the state of Iowa. These sites have then been added to the sites collected in the original study, and statistical analysis has been performed on the entire set. It is hoped that this will have two major effects. First, using data from only one state allows the analysis of a larger set of independent variables with a greater degree of commonality than was possible in the multi-state study, since the data are not limited by state-to-state differences in data collection and retention. Second, more data from additional sites will increase the degrees of freedom in the model and, it is hoped, add confidence to the results.