869 results for PROPORTIONAL HAZARD AND ACCELERATED FAILURE MODELS


Relevance:

100.00%

Publisher:

Abstract:

In most materials, short stress waves are generated during plastic deformation, phase transformation, crack formation and crack growth. These phenomena are exploited in acoustic emission (AE) techniques for the detection of material defects across a wide spectrum of areas, ranging from nondestructive testing to the monitoring of microseismic activity. The AE technique is also used for defect source identification and for failure detection. AE waves consist of P waves (primary/longitudinal waves), S waves (shear/transverse waves) and Rayleigh (surface) waves, as well as reflected and diffracted waves. The propagation of AE waves in these various modes makes source location difficult. To use the acoustic emission technique for accurate source identification, an understanding of how AE signals propagate to various locations in a plate structure is essential. Such an understanding can also guide sensor placement for optimum detection of AE signals and characterisation of the source. In practice, AE signals radiate from the source as stress waves, and unless the type of stress wave is known it is very difficult to locate the source using the classical propagation velocity equations. This paper describes the simulation of AE waves to identify the source location and its characteristics in a steel plate, as well as the wave modes. Finite element analysis (FEA) is used for the numerical simulation of wave propagation in a thin plate. By knowing the type of wave generated, it is possible to apply the appropriate wave equations to determine the location of the source. For a single plate structure, the results show that the simulation algorithm is effective in reproducing different stress waves.
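The classical velocity-based location that the abstract contrasts with simulation can be illustrated with a toy one-dimensional example: once the wave mode (and hence its speed) is known, the arrival-time difference at two sensors pins down the source position. This is an illustrative sketch, not the paper's FEA procedure; the wave-speed value below is an assumption.

```python
def locate_source_1d(x1, x2, t1, t2, v):
    """Locate an AE source on the line between two sensors.

    With a common (unknown) emission time t0,
        t1 = t0 + (x - x1) / v   and   t2 = t0 + (x2 - x) / v,
    so subtracting eliminates t0 and gives
        x = (x1 + x2) / 2 + v * (t1 - t2) / 2.
    Valid only for x1 <= x <= x2 and a single known wave mode.
    """
    return (x1 + x2) / 2.0 + v * (t1 - t2) / 2.0

# Example: sensors at 0 m and 1 m, assumed wave speed 5000 m/s (illustrative);
# a source at 0.3 m produces arrivals at 0.3/5000 s and 0.7/5000 s.
x = locate_source_1d(0.0, 1.0, 0.3 / 5000.0, 0.7 / 5000.0, 5000.0)
```

If the arrivals were picked from a different mode (e.g., the slower Rayleigh wave), the same formula applies with that mode's speed, which is why identifying the wave type first matters.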

Relevance:

100.00%

Publisher:

Abstract:

Business practices vary from one company to another, and they often need to change as business environments change. To satisfy different business practices, enterprise systems need to be customized; to keep up with ongoing changes, they need to be adapted. Because of their rigidity and complexity, the customization and adaptation of enterprise systems often takes excessive time, with potential failures and budget shortfalls. Moreover, enterprise systems often hold business back because they cannot be rapidly adapted to support changing business practices. An extensive literature has addressed this issue by identifying success or failure factors, implementation approaches, and project management strategies; those efforts aimed to learn lessons from post-implementation experience to help future projects. This research looks at the issue from a different angle: it delivers a systematic method for developing flexible enterprise systems which can be easily tailored to different business practices or rapidly adapted when business practices change. First, this research examines the role of system models in enterprise system development, and the relationship between system models and software programs in the contexts of computer-aided software engineering (CASE), model-driven architecture (MDA) and workflow management systems (WfMS). Then, by applying analogical reasoning, it introduces the concept of model-driven enterprise systems. The novelty of model-driven enterprise systems is that system models are extracted from software programs and kept independent of them. In this paradigm, system models act as instructions that guide and control the behavior of software programs, and software programs function by interpreting the instructions in the system models.
This mechanism creates the opportunity to tailor such a system simply by changing its system models. To make this possible, system models must be represented in a language that can be easily understood by human beings and effectively interpreted by computers; this research therefore investigates various semantic representations to support model-driven enterprise systems. The significance of this research is 1) the transplantation to enterprise systems of the structure that gives modern machines and WfMS their flexibility; and 2) the advancement of MDA by extending the role of system models from guiding system development to controlling system behavior. The research contributes to the enterprise systems area from three perspectives: 1) a new paradigm of enterprise systems, in which enterprise systems consist of two essential, loosely coupled elements that can exist independently: system models and software programs; 2) semantic representations, which can effectively represent business entities, entity relationships, business logic and information-processing logic in a semantic manner, and which are the key enabling techniques of model-driven enterprise systems; and 3) a new role for system models: traditionally, system models guide developers in writing system source code, whereas this research promotes them to controlling the behavior of enterprise systems.
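The core idea, programs that execute by interpreting an external system model, can be sketched in a few lines. Here the "system model" is plain data and the "software program" is a generic engine; the model contents and handler names are hypothetical, purely for illustration.

```python
# A hypothetical system model kept outside the program as plain data:
# editing it re-tailors the system without touching the engine code.
ORDER_MODEL = {
    "start": {"action": "collect_order", "next": "check"},
    "check": {"action": "check_stock", "next": "done"},
    "done":  {"action": None, "next": None},
}

def run(model, handlers, state="start"):
    """Generic engine: walks the model and dispatches each step's action."""
    trace = []
    while state is not None:
        step = model[state]
        if step["action"] is not None:
            handlers[step["action"]](trace)
        state = step["next"]
    return trace

# Handlers bind action names to behavior; here they merely record execution.
handlers = {name: (lambda n: (lambda trace: trace.append(n)))(name)
            for name in ("collect_order", "check_stock")}
executed = run(ORDER_MODEL, handlers)
```

Changing `ORDER_MODEL` (say, inserting an extra step) changes system behavior with no change to `run`, which is the loose coupling the abstract describes.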

Relevance:

100.00%

Publisher:

Abstract:

Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
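The duals described here are minimizations of a convex function over the probability simplex, for which the EG update multiplies each coordinate by exp(-η·gradient) and renormalizes. A minimal sketch on a toy objective (not the paper's structured-prediction duals; the target distribution is illustrative):

```python
import math

def eg_step(w, grad, eta):
    """One exponentiated-gradient step; the iterate stays on the simplex."""
    scaled = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    z = sum(scaled)
    return [s / z for s in scaled]

# Toy objective f(w) = 0.5 * ||w - p||^2 with p itself on the simplex,
# so the constrained minimizer is exactly p.
p = [0.5, 0.3, 0.2]
w = [1.0 / 3.0] * 3
for _ in range(500):
    grad = [wi - pi for wi, pi in zip(w, p)]  # gradient of f at w
    w = eg_step(w, grad, eta=0.5)
```

Note that no projection step is needed: the multiplicative update plus normalization keeps `w` nonnegative and summing to one, which is what makes EG natural for simplex-constrained duals.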

Relevance:

100.00%

Publisher:

Abstract:

Orthopaedic fracture fixation implants are increasingly being designed using accurate 3D models of long bones based on computed tomography (CT). Unlike CT, magnetic resonance imaging (MRI) does not involve ionising radiation and is therefore a desirable alternative. This study aims to quantify the accuracy of MRI-based 3D models of long bones relative to CT-based models. The femora of five intact cadaver ovine limbs were scanned with a 1.5 T MRI scanner and a CT scanner. Image segmentation of the CT and MRI data was performed using a multi-threshold segmentation method. Reference models were generated by digitising the bone surfaces, free of soft tissue, with a mechanical contact scanner, and the MRI- and CT-derived models were validated against these references. The results demonstrated that the CT-based models contained an average error of 0.15 mm while the MRI-based models contained an average error of 0.23 mm. Statistical validation showed no significant differences between 3D models based on CT and MRI data. These results indicate that the geometric accuracy of MRI-based 3D models is comparable to that of CT-based models, and that MRI is therefore a potential alternative to CT for generating 3D models with high geometric accuracy.
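Average errors like those reported are typically computed as mean distances from the derived model's surface to the reference surface. A simplified point-cloud version is sketched below, using nearest-neighbour distance in place of true point-to-surface distance (an approximation, and the coordinates are invented):

```python
import math

def mean_surface_error(test_points, reference_points):
    """Mean distance from each test point to its nearest reference point."""
    return sum(
        min(math.dist(p, q) for q in reference_points)
        for p in test_points
    ) / len(test_points)

# Illustrative numbers only: two vertices offset from the reference by 0.2 and 0.1
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
test = [(0.0, 0.0, 0.2), (1.0, 0.0, 0.1)]
err = mean_surface_error(test, ref)
```

On dense meshes this brute-force nearest-neighbour search is replaced by a spatial index (e.g., a k-d tree), but the reported figure is the same kind of average.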

Relevance:

100.00%

Publisher:

Abstract:

Evaluating the safety of different traffic facilities is a complex and crucial task. Microscopic simulation models have been widely used for traffic management but have been largely neglected in traffic safety studies. Using micro-simulation to study safety is more ethical and accessible than traditional safety studies, which rely solely on historical crash data. However, current microscopic models are unable to mimic unsafe driver behavior because they are built on presumptions of safe driver behavior. This highlights the need for a critical examination of current microscopic models to determine which components and parameters affect the reproduction of safety indicators, and whether those indicators are valid measures of traffic safety. Selected safety indicators were therefore tested for straight motorway segments in Brisbane, Australia. The test examined the capability of a micro-simulation model and provides a better understanding of how such models, in particular their car-following components, can be enriched to produce more accurate safety indicators.
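One common surrogate safety indicator that a car-following model can reproduce is time-to-collision (TTC): the time until two vehicles would collide if both kept their current speeds. A hedged sketch follows; the 1.5 s conflict threshold is illustrative, not a value from the study.

```python
def time_to_collision(gap_m, v_follower, v_leader):
    """TTC in seconds; infinite when the follower is not closing the gap."""
    closing_speed = v_follower - v_leader
    return gap_m / closing_speed if closing_speed > 0 else float("inf")

def count_conflicts(samples, ttc_threshold=1.5):
    """Count observations whose TTC falls below a critical threshold."""
    return sum(
        1 for gap, vf, vl in samples
        if time_to_collision(gap, vf, vl) < ttc_threshold
    )

# (gap in m, follower speed in m/s, leader speed in m/s) -- invented samples
samples = [(20.0, 25.0, 20.0),   # TTC = 4.0 s, not a conflict
           (5.0, 30.0, 25.0),    # TTC = 1.0 s, conflict
           (15.0, 20.0, 22.0)]   # gap opening, TTC = infinity
```

The critique in the abstract applies directly here: if the simulated follower never closes the gap aggressively, low-TTC events are never generated, so the indicator says nothing about real unsafe behavior.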

Relevance:

100.00%

Publisher:

Abstract:

Crisis holds the potential for profound change in organizations and industries. The past 50 years of crisis management highlight key shifts in crisis practice, creating opportunities for multiple theories and research tracks. Defining crises such as the Tylenol tampering case, the Exxon Valdez spill, and the September 11 terrorist attacks have influenced or challenged the principles of best practice in crisis communication in public relations. This study traces the development of crisis process and practice by identifying shifts in crisis research and models and mapping these against key management theories and practices. The findings define three crisis domains: crisis planning; building and testing predictive models; and mapping and measuring external environmental influences. These domains mirror but lag the evolution of management theory, suggesting that researchers should reshape the research agenda to close the gap and lead the next stage of development in crisis communication for effective organizational outcomes.

Relevance:

100.00%

Publisher:

Abstract:

The compressed gas industry and government agencies worldwide use "adiabatic compression" testing to qualify high-pressure valves, regulators, and other related flow control equipment for gaseous oxygen service. The methodology is known by various terms, the most common being adiabatic compression testing, gaseous fluid impact testing, pneumatic impact testing, and BAM testing. It is described in greater detail throughout this document, but in summary it consists of pressurizing a test article (valve, regulator, etc.) with gaseous oxygen within 15 to 20 milliseconds (ms). Because the driven gas and the driving gas are rapidly compressed to the final test pressure at the inlet of the test article, the sudden increase in pressure rapidly heats them to temperatures (thermal energies) sufficient, at times, to ignite the nonmetallic materials (seals and seats) used within the test article. In general, the more rapid the compression, the more "adiabatic" the pressure surge is presumed to be and the closer to an isentropic process it has been argued to be. Adiabatic compression is widely considered the most efficient ignition mechanism for directly kindling a nonmetallic material in gaseous oxygen and has been implicated in many fire investigations. Because many nonmetallic materials are easily ignited by this heating mechanism, many industry standards prescribe this testing. However, results between laboratories conducting the testing have not always been consistent. Research into the test method indicated that the thermal profile (i.e., the temperature/time history of the gas) achieved during adiabatic compression testing as required by the prevailing industry standards has not been fully modeled or empirically verified, although attempts have been made.
This research evaluated the following questions: 1) Can the rapid compression process required by the industry standards be modeled thermodynamically and fluid-dynamically so that predictions of the thermal profiles can be made? 2) Can the thermal profiles produced by the rapid compression process be measured, in order to validate the thermodynamic and fluid dynamic models and estimate the severity of the test? 3) Can controlling parameters be recommended so that new guidelines may be established in the industry standards to resolve the inconsistencies between test laboratories?
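The presumed "adiabatic" severity of such a surge can be bounded with the ideal-gas isentropic relation T2 = T1 · (P2/P1)^((γ−1)/γ). The sketch below uses illustrative numbers (γ ≈ 1.4 for oxygen, a 293 K start, a 27.6 MPa final pressure); real surges are not perfectly isentropic, which is part of what the research set out to quantify.

```python
def isentropic_final_temperature(t1_kelvin, p1, p2, gamma=1.4):
    """Ideal-gas temperature after reversible adiabatic (isentropic) compression."""
    return t1_kelvin * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Illustrative: oxygen taken from 0.1 MPa, 293 K to 27.6 MPa in one surge
t2 = isentropic_final_temperature(293.0, 0.1e6, 27.6e6)
```

Final temperatures on the order of 1500 K for this pressure ratio sit well above the ignition temperatures of typical polymer seal and seat materials in oxygen, which is why the mechanism can kindle nonmetallics so readily.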

Relevance:

100.00%

Publisher:

Abstract:

In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet.
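The article's models combine text mining with a semantic language model, but the baseline idea of classifying a review as spam or ham from word statistics can be sketched with a tiny Naive Bayes classifier. The training snippets below are invented toy data, and this is a generic baseline, not the authors' model.

```python
import math
from collections import Counter

def train(labeled_docs):
    """labeled_docs: iterable of (label, text). Returns word and doc counts."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    doc_counts = Counter()
    for label, text in labeled_docs:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, doc_counts

def classify(word_counts, doc_counts, text):
    """Naive Bayes with add-one smoothing over the training vocabulary."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    best_label, best_logprob = None, -math.inf
    for label in ("spam", "ham"):
        logprob = math.log(doc_counts[label] / sum(doc_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            logprob += math.log((word_counts[label][word] + 1)
                                / (total + len(vocab)))
        if logprob > best_logprob:
            best_label, best_logprob = label, logprob
    return best_label

# Invented toy reviews, purely illustrative
docs = [("spam", "amazing perfect best best"),
        ("spam", "perfect amazing must buy"),
        ("ham", "battery died after a week"),
        ("ham", "screen ok but battery poor")]
wc, dc = train(docs)
```

The abstract's point is precisely that such surface-level word statistics often fail on well-crafted fake reviews, which is what motivates adding a semantic language model on top.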

Relevance:

100.00%

Publisher:

Abstract:

In 2002, the United Nations Office on Drugs and Crime (UNODC) issued a report entitled Results of a pilot survey of forty selected organized criminal groups in sixteen countries, which established five models of organised crime. This paper reviews these and other common organised crime and drug trafficking models, and applies them to cases of South-East Asian drug trafficking in the Australian state of Queensland. The study tested three hypotheses: (1) South-East Asian drug trafficking groups in Queensland operate as a criminal network or core group; (2) wholesale drug distributors in Queensland do not fit consistently under any particular UN organised crime model; and (3) street dealers have no organisational structure. The study concluded that drug trafficking and importation closely resemble a criminal network or core group structure; that wholesale dealers did not fit consistently into any UN organised crime model; and that street dealers had no organisational structure, since such structure is typically found only in mid- to high-level drug trafficking.

Relevance:

100.00%

Publisher:

Abstract:

Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high-resolution aerial images and LiDAR point clouds is presented. A framework for road information modeling is proposed for rural and urban scenarios respectively, and an integrated system has been developed for road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by road centerlines detected in low-resolution images, both of which are then employed to facilitate road information generation in high-resolution images. A histogram thresholding method is then used to classify road details in the high-resolution images, with color space transformation used for data preparation. After road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Pavement markings are then extracted from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces; the digital terrain model (DTM) produced from LiDAR data can also be incorporated to obtain a 3D road model. As road extraction in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high-resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of the aerial images and the accurate vertical information provided by airborne LiDAR.
Object-oriented image analysis methods are employed for feature classification and road detection in the aerial images. In this process, an adaptive mean shift (MS) segmentation algorithm is first used to segment the original images into meaningful object-oriented clusters. The support vector machine (SVM) algorithm is then applied to the MS-segmented image to extract road objects. The road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees, and the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested on rural and urban datasets respectively. The rural road extraction method is evaluated on pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland; the urban algorithm is tested on datasets of Bundaberg that combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information has been carried out for both datasets. For the Gympie datasets, more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, with false alarm rates below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
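Otsu's method, used in the rural pipeline to separate lane markings from the filtered image, picks the threshold that maximizes the between-class variance of the grey-level histogram. A compact sketch on a synthetic bimodal image:

```python
def otsu_threshold(pixels, levels=256):
    """Return the grey level maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = 0.0      # background weight (pixel count so far)
    sum_b = 0.0    # background intensity sum so far
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal "image": dark road surface (10) and bright markings (200)
threshold = otsu_threshold([10] * 50 + [200] * 50)
```

On the real Gabor-filtered images the histogram is noisier, but the principle is identical: the markings form a bright mode that Otsu separates automatically, with no hand-tuned threshold.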

Relevance:

100.00%

Publisher:

Abstract:

Background: Depression is a major public health problem worldwide and is currently ranked second to heart disease for years lost due to disability. For many decades, international research has found that depressive symptoms occur more frequently among low socioeconomic status (SES) individuals than among their more-advantaged peers, but the reasons why low socioeconomic groups suffer more depressive symptoms are not well understood. Studies investigating the prevalence of depression and its association with SES emanate largely from developed countries, with little research in developing countries; in particular, there is a serious dearth of research on depression, and no investigation of its association with SES, in Vietnam. The aims of the research presented in this thesis are to estimate the prevalence of depressive symptoms among Vietnamese adults, examine the nature and extent of the association between SES and depression, and elucidate causal pathways linking SES to depressive symptoms.

Methods: The research was conducted between September 2008 and November 2009 in Hue city in central Vietnam and used a combination of qualitative (in-depth interview) and quantitative (survey) data collection methods. The qualitative study contributed to the development of the theoretical model and to the refinement of culturally appropriate data collection instruments for the quantitative study. The main survey was a cross-sectional population-based survey with randomised cluster sampling; 1976 respondents aged 25-55 years from ten randomly selected residential zones (quarters) of Hue city completed the questionnaire (response rate 95.5%).

Measures: SES was classified using three indicators: education, occupation and income. The Center for Epidemiologic Studies-Depression (CES-D) scale was used to measure depressive symptoms (range 0-51, mean = 11.0, SD = 8.5). Three cut-off points for CES-D scores were applied: 'at risk for clinical depression' (16 or above), 'depressive symptoms' (above 21) and 'depression' (above 25). Six psychosocial indicators (lifetime trauma, chronic stress, recent life events, social support, self-esteem, and mastery) were hypothesized to mediate the association between SES and depressive symptoms.

Analyses: The prevalence of depressive symptoms was analysed using bivariate analyses. The multivariable analytic phase comprised ordinary least squares regression, in accordance with Baron and Kenny's three-step framework for mediation modeling. All analyses were adjusted for a range of confounders, including age, marital status, smoking, drinking and chronic diseases, and the mediation models were stratified by gender.

Results: Among these Vietnamese adults, 24.3% were at or above the cut-off for being 'at risk for clinical depression', 11.9% were classified as having depressive symptoms and 6.8% were categorised as having depression. SES was inversely related to depressive symptoms: the least educated, those with low occupational status, and those with the lowest incomes reported more depressive symptoms. Socioeconomically disadvantaged individuals were more likely to report experiencing stress (lifetime trauma, chronic stress or recent life events), perceived less social support, and reported fewer personal resources (self-esteem and mastery) than their more-advantaged counterparts. These psychosocial factors were all significantly associated with depressive symptoms independent of SES, and each showed a significant mediating effect on the association between SES and depressive symptoms, for all measures of SES and for both males and females. In particular, personal resources (mastery, self-esteem) and chronic stress accounted for a substantial proportion of the variation in depressive symptoms between socioeconomic groups. Social support and recent life events contributed modestly to socioeconomic differences in depressive symptoms, whereas lifetime trauma contributed the least to these inequalities.

Conclusion: This is the first known study in Vietnam, or any developing country, to systematically examine the extent to which psychosocial factors mediate the relationship between SES and depression. The study contributes new evidence regarding the burden of depression in Vietnam. The findings have practical relevance for advocacy, mental health promotion and health-care services, and point to the need for programs that build a sense of personal mastery and self-esteem. More broadly, the work presented in this thesis contributes to the international scientific literature on the social determinants of depression.
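Baron and Kenny's three-step framework, as used in the analyses above, can be sketched on synthetic data: regress Y on X (total effect), M on X, then Y on X and M together; mediation shows up as the X coefficient shrinking once the mediator M enters. The `ols` solver and the data below are a toy illustration, not the thesis analysis code.

```python
def ols(X, y):
    """Least-squares coefficients [intercept, b1, ...] via normal equations."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                      # Gaussian elimination with pivoting
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv], c[i], c[piv] = A[piv], A[i], c[piv], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [arj - f * aij for arj, aij in zip(A[r], A[i])]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Synthetic example of near-full mediation: X -> M -> Y
x = [float(i) for i in range(10)]
m = [2.0 * xi + 0.5 * (-1) ** i for i, xi in enumerate(x)]   # M roughly 2X
y = [3.0 * mi + 0.1 * (-1) ** i for i, mi in enumerate(m)]   # Y roughly 3M

total = ols([[xi] for xi in x], y)[1]          # step 1: Y on X (total effect)
a_path = ols([[xi] for xi in x], m)[1]         # step 2: M on X
direct, b_path = ols(list(zip(x, m)), y)[1:]   # step 3: Y on X and M
```

Here the direct effect of X in step 3 is close to zero relative to the total effect of roughly 6, the signature of mediation through M; in the thesis the same logic is applied with SES as X, each psychosocial factor as M, and CES-D score as Y, with confounder adjustment added.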

Relevance:

100.00%

Publisher:

Abstract:

This article provides a detailed critique of the incentives-access binary in copyright discourse. Mainstream copyright theory generally accepts that copyright is a balance between providing incentives for authors to invest in the production of cultural works and enhancing the dissemination of those works to the public. This article argues that dominant copyright theory obscures the possibility of developing a model of copyright that can support authors without necessarily limiting access to creative works. The abundance the Internet allows suggests that increasing access to cultural works, to enhance learning, sharing, and creative play, should be a fundamental goal of copyright policy. The article examines models of supporting and coordinating cultural production without exclusivity, including crowdfunding, tips, levies, restitution, and service-based models. In their current forms, each of these models fails to provide a cohesive and convincing vision of the two main functions of copyright: the instrumental function (how cultural production can be funded) and the fairness function (how authors can be adequately rewarded). The article proposes three avenues for future research into the viability of alternative copyright models: (1) a better theory of fairness in copyright rewards; (2) more empirical study of commons models of cultural production; and (3) a critical examination of the role exclusivity in copyright plays in limiting noneconomic harm.

Relevance:

100.00%

Publisher:

Abstract:

The importance of actively managing and analyzing business processes is acknowledged more than ever in organizations today. Business processes form an essential part of an organization, and their application areas are manifold. Most organizations keep records of the various activities carried out for auditing purposes, but these records are rarely used for analysis. This paper describes the design and implementation of a process analysis tool that replays, analyzes and visualizes a variety of performance metrics using a process definition and its execution logs. Performing performance analysis on existing and planned process models is an effective way for organizations to detect bottlenecks within their processes and to make better-informed process improvement decisions. Our technique is applied to processes modeled in the YAWL language. Execution logs of process instances are compared against the corresponding YAWL process model and replayed in a robust manner, taking into account any noise in the logs. Finally, the performance characteristics obtained from replaying the log in the model are projected onto the model.
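One of the simplest performance metrics such a replay can project onto a model is average activity duration, computed from execution-log timestamps. A schematic sketch follows; the log format and activity names are invented, and YAWL-specific replay (including the noise handling mentioned above) is not shown.

```python
from collections import defaultdict

def average_durations(log):
    """log: iterable of (case_id, activity, start_ts, end_ts) events.

    Returns the mean duration per activity, the kind of figure a process
    analysis tool would project onto the corresponding model element."""
    totals = defaultdict(lambda: [0.0, 0])  # activity -> [sum, count]
    for _case, activity, start, end in log:
        totals[activity][0] += end - start
        totals[activity][1] += 1
    return {act: s / n for act, (s, n) in totals.items()}

# Invented two-case log (timestamps in arbitrary units)
log = [("c1", "Approve", 0.0, 4.0),
       ("c2", "Approve", 1.0, 3.0),
       ("c1", "Pay", 4.0, 5.0)]
stats = average_durations(log)
```

An activity whose average duration dominates the rest is a bottleneck candidate; projecting the numbers onto the model makes that visible at a glance.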

Relevance:

100.00%

Publisher:

Abstract:

Data collected at Canadian public housing estates in eastern Ontario are used here to test two hypotheses. Overall, the women surveyed report more violence than do differently situated women in other general surveys. More specifically, complex theoretical models were designed to generate two hypotheses for further analysis: first, that separated or divorced women are more likely to be abused within public housing than married women; and second, that cohabiting women will report violence victimization at a higher rate than separated, divorced, or married women. Some support was found for both hypotheses, and the theoretical models are used to discuss these findings.