428 results for Computational Identification
Abstract:
Recent studies on automatic new topic identification in Web search engine user sessions have demonstrated that neural networks are successful at this task. However, most of this work applied its new topic identification algorithms to data logs from a single search engine. In this study, we investigate whether the application of neural networks for automatic new topic identification is more successful on some search engines than on others. Sample data logs from the Norwegian search engine FAST (currently owned by Overture) and from Excite are used in this study. The findings suggest that query logs with more topic shifts tend to yield more successful results on shift-based performance measures, whereas logs with more topic continuations tend to yield better results on continuation-based performance measures.
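To make the evaluation setup concrete, the sketch below trains a small feed-forward neural network to label consecutive query pairs in a session log as topic shifts or continuations, then reports shift-based and continuation-based measures as per-class precision and recall. The features, data and network configuration are illustrative assumptions, not the setup used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical features for consecutive query pairs in a session log:
# [time gap in seconds, term overlap ratio, change in query length]
X = np.array([
    [5, 0.8, 0], [300, 0.0, 2], [12, 0.5, 1], [600, 0.1, 3],
    [8, 0.9, 0], [450, 0.0, 1], [20, 0.6, 0], [900, 0.0, 4],
], dtype=float)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = topic shift, 0 = continuation

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                  random_state=0))
clf.fit(X, y)
pred = clf.predict(X)  # scored on the training data purely for illustration

# Continuation-based (label 0) and shift-based (label 1) performance measures.
precision, recall, _, _ = precision_recall_fscore_support(y, pred, labels=[0, 1])
print("continuation precision/recall:", precision[0], recall[0])
print("shift precision/recall:", precision[1], recall[1])
```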
Abstract:
Is it possible to control identities using performance management systems (PMSs)? This paper explores the theoretical fusion of management accounting and identity studies, providing a synthesised view of control, PMSs and identification processes. It argues that the effective use of PMSs generates a range of obtrusive mechanistic and unobtrusive organic controls that mediate identification processes to achieve a high level of identity congruency between individuals and collectives—groups and organisations. This paper contends that mechanistic control of PMSs provides sensebreaking effects and also creates structural conditions for sensegiving in top-down identification processes. These processes encourage individuals to continue the bottom-up processes of sensemaking, enacting identity and constructing identity narratives. Over time, PMS activities and conversations periodically mediate several episodes of identification to connect past, current and future identities. To explore this relationship, the dual locus of control—collectives and individuals—is emphasised to explicate their interplay. This multidisciplinary approach contributes to explaining the multidirectional effects of PMSs in obtrusive as well as unobtrusive ways, in order to control the nature of collectives and individuals in organisations.
Abstract:
The relationship between weather and mortality has been observed for centuries. Recently, studies on temperature-related mortality have become a popular topic as climate change continues. Most of the previous studies found that exposure to hot or cold temperature affects mortality. This study aims to address three research questions: 1. What is the overall effect of daily mean temperature variation on elderly mortality in the published literature, using a meta-analysis approach? 2. Does the association between temperature and mortality differ with age, sex, or socio-economic status in Brisbane? 3. How does the magnitude of the lag effects of daily mean temperature on mortality vary by age and cause-of-death group in Brisbane? In the meta-analysis, there was a 1-2% increase in all-cause mortality for a 1°C decrease during cold temperature intervals and a 2-5% increase for a 1°C increment during hot temperature intervals among the elderly. Lags of up to 9 days in exposure to cold temperature intervals were statistically significantly associated with all-cause mortality, but no significant lag effects were observed for hot temperature intervals. In Brisbane, the harmful effect of high temperature (over 24°C) on mortality appeared to be greater among the elderly than among other age groups. The effect estimate among women was greater than among men. However, no evidence was found that socio-economic status modified the temperature-mortality relationship. The results of this research also show longer lag effects on cold days and shorter lag effects on hot days. For 3-day hot effects associated with a 1°C increase above the threshold, the highest percent increases in mortality occurred among people aged 85 years or over (5.4% (95% CI: 1.4%, 9.5%)) compared with all age groups (3.2% (95% CI: 0.9%, 5.6%)). The effect estimate for cardiovascular deaths was slightly higher than that for all-cause mortality. For overall 21-day cold effects associated with a 1°C decrease below the threshold, the percent increases in mortality for people aged 85 years or over and for cardiovascular diseases were 3.9% (95% CI: 1.9%, 6.0%) and 3.4% (95% CI: 0.9%, 6.0%), respectively, compared with all age groups (2.0% (95% CI: 0.7%, 3.3%)). Little research of this kind has been conducted in the Southern Hemisphere. This PhD research may contribute to the quantitative assessment of the overall impact, effect modification and lag effects of temperature variation on mortality in Australia, and the findings may provide useful information for the development and implementation of public health policies to reduce and prevent temperature-related health problems.
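The percent increases and confidence intervals quoted above are the usual way of presenting coefficients from log-linear temperature-mortality models. A minimal sketch of that conversion is given below; the coefficient and standard error are hypothetical values chosen only to reproduce a figure of the same magnitude as the all-age 3-day hot effect, not numbers taken from the thesis.

```python
import math

def percent_increase(beta, se, z=1.96):
    """Convert a log-relative-risk coefficient (per 1 degree C) into a
    percent change in mortality with a 95% confidence interval.
    beta and se are illustrative inputs, not published model coefficients."""
    point = (math.exp(beta) - 1) * 100
    lower = (math.exp(beta - z * se) - 1) * 100
    upper = (math.exp(beta + z * se) - 1) * 100
    return point, lower, upper

# A coefficient of 0.0315 per degree C with SE 0.0117 corresponds to roughly
# a 3.2% increase (95% CI: 0.9%, 5.6%), the same form as the effects above.
print(percent_increase(0.0315, 0.0117))
```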
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is carried entirely by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date, only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games heavily rely on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
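As a rough illustration of what "quantitatively defined and automatically measured" visual correctness can mean, the sketch below scores a rendered frame against a trusted reference render using a simple mean per-pixel error; this is a stand-in metric, not the model-based or connectionist measures developed in the thesis.

```python
import numpy as np

def visual_consistency_score(rendered, reference, tolerance=2.0):
    """Flag a rendered frame as visually inconsistent when its mean per-pixel
    deviation from a trusted reference render exceeds a tolerance.
    rendered, reference: HxWx3 arrays of 8-bit RGB values."""
    diff = np.abs(rendered.astype(np.float64) - reference.astype(np.float64))
    mean_error = diff.mean()
    return mean_error, mean_error <= tolerance

# Synthetic stand-ins for a captured frame and its reference render.
rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
rendered = reference.copy()
rendered[10:20, 10:20] = 0  # simulate a localised rendering defect
print(visual_consistency_score(rendered, reference))
```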
Abstract:
Purpose: Colorectal cancer patients diagnosed with stage I or II disease are not routinely offered adjuvant chemotherapy following resection of the primary tumor. However, up to 10% of stage I and 30% of stage II patients relapse within 5 years of surgery from recurrent or metastatic disease. The aim of this study was to determine if tumor-associated markers could detect disseminated malignant cells and so identify a subgroup of patients with early-stage colorectal cancer that were at risk of relapse. Experimental Design: We recruited consecutive patients undergoing curative resection for early-stage colorectal cancer. Immunobead reverse transcription-PCR of five tumor-associated markers (carcinoembryonic antigen, laminin γ2, ephrin B4, matrilysin, and cytokeratin 20) was used to detect the presence of colon tumor cells in peripheral blood and within the peritoneal cavity of colon cancer patients perioperatively. Clinicopathologic variables were tested for their effect on survival outcomes in univariate analyses using the Kaplan-Meier method. A multivariate Cox proportional hazards regression analysis was done to determine whether detection of tumor cells was an independent prognostic marker for disease relapse. Results: Overall, 41 of 125 (32.8%) early-stage patients were positive for disseminated tumor cells. Patients who were marker positive for disseminated cells in post-resection lavage samples showed a significantly poorer prognosis (hazard ratio, 6.2; 95% confidence interval, 1.9-19.6; P = 0.002), and this was independent of other risk factors. Conclusion: The markers used in this study identified a subgroup of early-stage patients at increased risk of relapse post-resection for primary colorectal cancer. This method may be considered as a new diagnostic tool to improve the staging and management of colorectal cancer. © 2006 American Association for Cancer Research.
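For readers unfamiliar with the survival-analysis workflow described in the design, the sketch below shows a Kaplan-Meier fit and a multivariate Cox proportional hazards model using the lifelines Python library on made-up relapse data; the column names and values are hypothetical, and the original analysis was not necessarily performed with this software.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical relapse-free-survival data; all names and values are illustrative.
df = pd.DataFrame({
    "months_to_relapse": [14, 60, 9, 48, 36, 22, 60, 31],
    "relapsed":          [1,  0,  1, 0,  1,  1,  0,  1],
    "lavage_marker_pos": [1,  0,  1, 1,  0,  1,  0,  0],
    "stage_II":          [1,  1,  0, 1,  0,  1,  0,  1],
})

# Univariate view: Kaplan-Meier curve for the marker-positive subgroup.
kmf = KaplanMeierFitter()
positive = df["lavage_marker_pos"] == 1
kmf.fit(df.loc[positive, "months_to_relapse"], df.loc[positive, "relapsed"],
        label="lavage marker positive")
print(kmf.median_survival_time_)

# Multivariate view: Cox proportional hazards with marker status adjusted
# for another risk factor, yielding a hazard ratio for marker positivity.
cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_relapse", event_col="relapsed")
print(cph.hazard_ratios_)
```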
Abstract:
We report on an analysis of discussions in an online community of people with chronic illness using socio-cognitively motivated, automatically produced semantic spaces. The analysis aims to further the emerging theory of "transition" (how people can learn to incorporate the consequences of illness into their lives). An automatically derived representation of the sense of self for individuals is created in the semantic space by analysing the email utterances of the community members. The movement over time of the sense of self is visualised, via projection, with respect to axes of "ordinariness" and "extra-ordinariness". Qualitative evaluation shows that the visualisation parallels the transitions of people during the course of their illness. The research aims to advance tools for the analysis of textual data, to promote greater use of the tacit knowledge found in online virtual communities. We hope it also encourages further interest in the representation of sense-of-self.
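A minimal sketch of the general idea, assuming an LSA-style semantic space rather than the specific socio-cognitively motivated construction used in the study: build a reduced space from utterances, define an ordinariness/extra-ordinariness axis from anchor text, and project each utterance onto that axis.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy corpus standing in for community email utterances; the real system is
# built from a far larger archive and uses different anchors.
utterances = [
    "feeling quite normal today, back at work and cooking dinner",
    "another round of tests and scans, everything feels strange",
    "just an ordinary weekend with the family",
    "the diagnosis changed everything, nothing is routine anymore",
]

# Build a reduced semantic space (an LSA-style approximation).
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(utterances)
svd = TruncatedSVD(n_components=2, random_state=0)
space = svd.fit_transform(tfidf)

# Define an ordinariness axis from anchor text and project each utterance
# onto it to trace movement of the sense of self over time.
ordinary_anchor = svd.transform(vectorizer.transform(["ordinary normal routine"]))[0]
extraordinary_anchor = svd.transform(vectorizer.transform(["strange tests diagnosis"]))[0]
axis = extraordinary_anchor - ordinary_anchor
axis /= np.linalg.norm(axis)
print(space @ axis)  # one coordinate per utterance along the axis
```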
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, very few attempts have been made to explore structural damage with noise-polluted data, an unavoidable effect in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results with structural damage identification methods. It is therefore important to investigate a method that performs well with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection in building structures that accepts noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the function datagen of the MATLAB program available on the web site of the IASC-ASCE (International Association for Structural Control - American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of FRFs, calculation of damage index values using the proposed algorithm, development of artificial neural networks, introduction of the damage indices as input parameters, and damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study with different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, which are commonly encountered when dealing with industrial structures.
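A minimal sketch of a PCA-based damage indicator of the general kind described here, assuming the index is the residual of a test FRF outside the principal subspace fitted to baseline measurements; the paper's exact index definition and the subsequent neural-network stage are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_damage_index(healthy_frfs, test_frf, n_components=5):
    """Illustrative PCA-based damage index: fit a principal subspace to
    baseline FRF magnitudes and score a test FRF by its reconstruction
    residual outside that subspace.
    healthy_frfs: (n_measurements, n_frequency_lines) baseline |FRF| values.
    test_frf:     (n_frequency_lines,) values from the possibly damaged state."""
    pca = PCA(n_components=n_components)
    pca.fit(healthy_frfs)
    reconstructed = pca.inverse_transform(pca.transform(test_frf[None, :]))[0]
    residual = np.linalg.norm(test_frf - reconstructed)
    return residual / np.linalg.norm(test_frf)  # larger values suggest damage

# Hypothetical usage with noise-polluted synthetic FRFs.
rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.05, size=(30, 200))
damaged = rng.normal(1.0, 0.05, size=200) + 0.3 * np.sin(np.linspace(0, 6, 200))
print(pca_damage_index(healthy, damaged))
```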
Abstract:
The design-build (DB) system has been demonstrated to be an effective delivery method and has gained popularity worldwide. However, a number of operational variations of the DB system have emerged over the last decade to cater for different clients’ requirements. After deciding to procure a project through the DB system, the client still has to choose an appropriate configuration to deliver the project optimally. However, there is little research on the selection of DB operational variations, one of the main reasons being the lack of evaluation criteria for determining the appropriateness of each operational variation. To obtain such criteria, a three-round Delphi survey has been conducted with 20 construction experts in the People’s Republic of China (PRC). Seven top selection criteria were identified: (1) availability of competent design-builders; (2) client’s capabilities; (3) project complexity; (4) client’s control of the project; (5) early commencement and short duration; (6) reduced responsibility or involvement; and (7) clearly defined end user’s requirements. These selection criteria were found to show statistically significant agreement among the experts. These findings may furnish various stakeholders, DB clients in particular, with better insight to understand and compare the different operational variations of the DB system.
Abstract:
Design-builders play a vital role in the success of DB projects. In the construction market of the People’s Republic of China, however, most design-builders lack adequate competences to conduct DB projects successfully. The objective of this study is therefore to identify the key competences that design-builders should possess not only to ensure the success of DB projects but also to acquire competitive advantages in the DB market. Five semi-structured face-to-face interviews and a two-round Delphi questionnaire survey were conducted to identify the key competences of design-builders, and rankings were assigned to these competences on the basis of their relative importance. Six ranked key competences of design-builders were identified, namely: (1) experience with similar DB projects; (2) capability of corporate management; (3) combination of building techniques and design expertise; (4) financial capability for DB projects; (5) enterprise qualification and scale; and (6) credit records and reputation in the industry. Design-builders can use these research findings as guidelines to improve their DB competence. The findings will also be useful to clients during the selection of design-builders.
Abstract:
The design-build system has been demonstrated to be an effective delivery method and has gained popularity worldwide. Although an increasing number of clients are adopting the DB method in China, most of them remain inexperienced with it. The objective of this study is therefore to identify the key competences that a client or its consultant should possess to ensure the success of DB projects. Face-to-face interviews and a two-round Delphi questionnaire survey were conducted, identifying the following six key client competences: (1) the ability to clearly articulate project scope and objectives; (2) financial capacity for DB projects; (3) capability in contract management; (4) an adequate staff or consulting team; (5) effective coordination with contractors; and (6) experience with similar DB projects. This study will hopefully provide clients with measures to evaluate their DB competence and further promote their understanding of the DB system in the PRC.
Abstract:
Successful identification and exploitation of opportunities has been an area of interest to many entrepreneurship researchers. Since Shane and Venkataraman’s seminal work (e.g. Shane and Venkataraman, 2000; Shane, 2000), several scholars have theorised on how firms identify, nurture and develop opportunities. The majority of this literature has been devoted to understanding how entrepreneurs search for new applications of their technological base or discover opportunities based on prior knowledge (Zahra, 2008; Sarasvathy et al., 2003). In particular, knowledge about potential customer needs and problems that may present opportunities is vital (Webb et al., 2010). Whereas the role of prior knowledge of customer problems (Shane, 2003; Shepherd and DeTienne, 2005) and of positioning oneself in a so-called knowledge corridor (Fiet, 1996) has been researched, the role of opportunity characteristics and their interaction with the customer-related mechanisms that facilitate and hinder opportunity identification has received scant attention.
Abstract:
In this article we identify how computational automation achieved through programming has enabled a new class of music technologies with generative music capabilities. These generative systems can have a degree of music making autonomy that impacts on our relationships with them; we suggest that this coincides with a shift in the music-equipment relationship from tool use to a partnership. This partnership relationship can occur when we use technologies that display qualities of agency. It raises questions about the kinds of skills and knowledge that are necessary to interact musically in such a partnership. These are qualities of musicianship we call eBility. In this paper we seek to define what eBility might consist of and how consideration of it might affect music education practice. The 'e' in eBility refers not only to the electronic nature of computing systems but also to the ethical, enabling, experiential and educational dimensions of the creative relationship with technologies with agency. We hope to initiate a discussion around differentiating what we term representational technologies from those with agency and begin to uncover the implications of these ideas for music educators in schools and communities. We hope also to elucidate the emergent theory and practice that has enabled the development of strategies for optimising this kind of eBility where the tool becomes a partner. The identification of musical technologies with agency adds to the authors’ list of metaphors for technology use in music education that previously included tool, medium and instrument. We illustrate these ideas with examples and with data from our work with the jam2jam interactive music system. In this discussion we will outline our experiences with jam2jam as an example of a technology with agency and describe the aspects of eBility that interaction with it promotes.
Abstract:
In most materials, short stress waves are generated during plastic deformation, phase transformation, crack formation and crack growth. These phenomena are exploited in acoustic emission (AE) for the detection of material defects across a wide spectrum of areas, ranging from non-destructive testing to the monitoring of microseismic activity. The AE technique is also used for defect source identification and for failure detection. AE waves consist of P waves (primary/longitudinal waves), S waves (shear/transverse waves) and Rayleigh (surface) waves, as well as reflected and diffracted waves. The propagation of AE waves in various modes has made the determination of source location difficult. In order to use the acoustic emission technique for accurate identification of source location, an understanding of the wave propagation of AE signals at various locations in a plate structure is essential. Such an understanding can also assist in sensor placement for optimum detection of AE signals. In practice, AE signals radiating from a source propagate as stress waves, and unless the type of stress wave is known it is very difficult to locate the source using the classical propagation velocity equations. This paper describes the simulation of AE waves to identify the source location in a steel plate as well as the wave modes. Finite element analysis (FEA) is used for the numerical simulation of wave propagation in a thin plate. By knowing the type of wave generated, it is possible to apply the appropriate wave equations to determine the location of the source. For a single plate structure, the results show that the simulation algorithm is effective in simulating different stress waves.
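For context on the classical propagation velocity equations mentioned above, the sketch below computes nominal P-wave and S-wave speeds for steel from elastic constants and converts the arrival-time gap between the two modes at a sensor into a source distance; the material values are textbook figures, not parameters from the paper's FEA model.

```python
import math

# Nominal elastic properties for structural steel (illustrative values).
E = 210e9      # Young's modulus, Pa
nu = 0.30      # Poisson's ratio
rho = 7850.0   # density, kg/m^3

# Bulk longitudinal (P) and shear (S) wave speeds from elasticity theory.
c_p = math.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))
c_s = math.sqrt(E / (2 * rho * (1 + nu)))

def source_distance(dt_ps):
    """Estimate source-to-sensor distance from the arrival-time gap (s)
    between the P wave and the slower S wave at one sensor. This classical
    relation only applies once the dominant wave modes have been identified,
    which is the motivation for the simulations described above."""
    return dt_ps / (1.0 / c_s - 1.0 / c_p)

print(round(c_p), round(c_s))            # roughly 6000 m/s and 3200 m/s in steel
print(round(source_distance(50e-6), 3))  # distance for a 50 microsecond gap, in m
```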
Abstract:
Significant numbers of children are severely abused and neglected by parents and caregivers. Infants and very young children are the most vulnerable and are unable to seek help. To identify these situations and enable child protection and the provision of appropriate assistance, many jurisdictions have enacted ‘mandatory reporting laws’ requiring designated professionals such as doctors, nurses, police and teachers to report suspected cases of severe child abuse and neglect. Other jurisdictions have not adopted this legislative approach, at least partly motivated by a concern that the laws produce dramatic increases in unwarranted reports, which, it is argued, lead to investigations which infringe on people’s privacy, cause trauma to innocent parents and families, and divert scarce government resources from deserving cases. The primary purpose of this paper is to explore the extent to which opposition to mandatory reporting laws is valid based on the claim that the laws produce ‘overreporting’. The first part of this paper revisits the original mandatory reporting laws, discusses their development into various current forms, explains their relationship with policy and common law reporting obligations, and situates them in the context of their place in modern child protection systems. This part of the paper shows that, in general, contemporary reporting laws have expanded far beyond their original conceptualisation, but that there is also now a deeper understanding of the nature, incidence, timing and effects of different types of severe maltreatment, an awareness that the real incidence of maltreatment is far higher than that officially recorded, and strong evidence showing that the majority of identified cases of severe maltreatment are the result of reports by mandated reporters. The second part of this paper discusses the apparent effect of mandatory reporting laws on ‘overreporting’ by referring to Australian government data about reporting patterns and outcomes, with a particular focus on New South Wales. It will be seen that raw descriptive data about report numbers and outcomes appear to show that reporting laws produce both desirable consequences (identification of severe cases) and problematic consequences (increased numbers of unsubstantiated reports). Yet, to explore the extent to which the data support the overreporting claim, and because numbers of unsubstantiated reports alone cannot demonstrate overreporting, this part of the paper asks further questions of the data. Who makes reports, about which maltreatment types, and what are the outcomes of those reports? What is the nature of these reports; for example, to what extent are multiple reports made about the same child? What meaning can be attached to an ‘unsubstantiated’ report, and can such reports be used to show flaws in reporting effectiveness and problems in reporting laws? It will be suggested that the available evidence from Australia is not sufficiently detailed or strong to demonstrate the overreporting claim. However, it is also apparent that, whether adopting an approach based on public health and/or other principles, much better evidence about reporting needs to be collected and analysed. As well, more nuanced research needs to be conducted to identify what can reasonably be said to constitute ‘overreports’, and efforts must be made to minimise unsatisfactory reporting practice, informed by the relevant jurisdiction’s context and aims.
It is also concluded that, depending on the jurisdiction, the available data may provide useful indicators of positive, negative and unanticipated effects of specific components of the laws, and of the strengths, weaknesses and needs of the child protection system.