1000 results for lineup identification
Abstract:
Is it possible to control identities using performance management systems (PMSs)? This paper explores the theoretical fusion of management accounting and identity studies, providing a synthesised view of control, PMSs and identification processes. It argues that the effective use of PMSs generates a range of obtrusive mechanistic and unobtrusive organic controls that mediate identification processes to achieve a high level of identity congruence between individuals and collectives - groups and organisations. This paper contends that the mechanistic control of PMSs provides sensebreaking effects and also creates structural conditions for sensegiving in top-down identification processes. These processes encourage individuals to continue the bottom-up processes of sensemaking, enacting identity and constructing identity narratives. Over time, PMS activities and conversations periodically mediate several episodes of identification to connect past, current and future identities. To explore this relationship, the dual locus of control - collectives and individuals - is emphasised to explicate their interplay. This multidisciplinary approach contributes to explaining the multidirectional effects of PMSs in obtrusive as well as unobtrusive ways, in order to control the nature of collectives and individuals in organisations.
Abstract:
Purpose: Colorectal cancer patients diagnosed with stage I or II disease are not routinely offered adjuvant chemotherapy following resection of the primary tumor. However, up to 10% of stage I and 30% of stage II patients relapse within 5 years of surgery from recurrent or metastatic disease. The aim of this study was to determine if tumor-associated markers could detect disseminated malignant cells and so identify a subgroup of patients with early-stage colorectal cancer that were at risk of relapse. Experimental Design: We recruited consecutive patients undergoing curative resection for early-stage colorectal cancer. Immunobead reverse transcription-PCR of five tumor-associated markers (carcinoembryonic antigen, laminin γ2, ephrin B4, matrilysin, and cytokeratin 20) was used to detect the presence of colon tumor cells in peripheral blood and within the peritoneal cavity of colon cancer patients perioperatively. Clinicopathologic variables were tested for their effect on survival outcomes in univariate analyses using the Kaplan-Meier method. A multivariate Cox proportional hazards regression analysis was done to determine whether detection of tumor cells was an independent prognostic marker for disease relapse. Results: Overall, 41 of 125 (32.8%) early-stage patients were positive for disseminated tumor cells. Patients who were marker positive for disseminated cells in post-resection lavage samples showed a significantly poorer prognosis (hazard ratio, 6.2; 95% confidence interval, 1.9-19.6; P = 0.002), and this was independent of other risk factors. Conclusion: The markers used in this study identified a subgroup of early-stage patients at increased risk of relapse post-resection for primary colorectal cancer. This method may be considered as a new diagnostic tool to improve the staging and management of colorectal cancer. © 2006 American Association for Cancer Research.
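The univariate analysis above uses the Kaplan-Meier method. The sketch below implements a minimal Kaplan-Meier estimator on an invented follow-up cohort (the function name, times and event codes are illustrative, not the study's patient records; the multivariate Cox step is not shown).

```python
# Minimal Kaplan-Meier estimator (illustrative sketch; the cohort below is
# invented and is not the study's patient data).

def kaplan_meier(times, events):
    """times: follow-up (e.g. months); events: 1 = relapse observed, 0 = censored.
    Returns (time, survival probability) pairs at each observed event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    for i in order:
        if events[i]:                 # relapse: survival curve steps down
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1                  # subject leaves the risk set either way
    return curve

# hypothetical follow-up of eight patients
times  = [6, 12, 18, 24, 30, 36, 48, 60]
events = [1,  1,  0,  1,  0,  1,  0,  0]
print(kaplan_meier(times, events))
```

With tied event times the usual d/n grouping would be needed; this version is enough to show the estimator's step structure, which the study's univariate comparisons rest on.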
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage using noise-polluted data, an unavoidable effect in the real world. Measurement data are contaminated by noise from the test environment as well as from electronic devices, and this noise tends to produce erroneous results with structural damage identification methods. It is therefore important to investigate a method that performs well with noise-polluted data. This paper introduces a new damage index using principal component analysis (PCA) for damage detection of building structures that accepts noise-polluted frequency response functions (FRFs) as input. The FRF data are obtained from the function datagen of the MATLAB program available on the website of the IASC-ASCE (International Association for Structural Control - American Society of Civil Engineers) Structural Health Monitoring (SHM) Task Group. The proposed method involves a five-stage process: calculation of FRFs; calculation of damage index values using the proposed algorithm; development of the artificial neural networks; introduction of the damage indices as input parameters; and damage detection of the structure. This paper briefly describes the methodology and the results obtained in detecting damage in all six cases of the benchmark study with different noise levels. The proposed method is applied to a benchmark problem sponsored by the IASC-ASCE Task Group on Structural Health Monitoring, which was developed to facilitate the comparison of various damage identification methods. The results show that the PCA-based algorithm is effective for structural health monitoring with noise-polluted FRFs, which are of common occurrence when dealing with industrial structures.
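A PCA-based damage index of the general kind described above can be sketched as a reconstruction-error (novelty) measure: healthy-state FRFs define a principal subspace, and an FRF lying far from that subspace signals damage. This is a generic illustration on synthetic data, not the paper's exact algorithm or the benchmark FRFs.

```python
import numpy as np

# Illustrative PCA novelty index for FRF-based damage detection
# (a generic sketch on synthetic data, not the paper's damage-index algorithm).

rng = np.random.default_rng(0)

def pca_basis(frfs, n_components):
    """frfs: (n_measurements, n_frequencies) matrix of healthy-state FRFs."""
    mean = frfs.mean(axis=0)
    _, _, vt = np.linalg.svd(frfs - mean, full_matrices=False)
    return mean, vt[:n_components]          # leading principal directions

def damage_index(frf, mean, basis):
    """Reconstruction error: distance of a new FRF from the healthy subspace."""
    centred = frf - mean
    recon = basis.T @ (basis @ centred)
    return np.linalg.norm(centred - recon)

# synthetic healthy FRFs: one dominant resonance shape plus measurement noise
freqs = np.linspace(0, 1, 200)
healthy = np.array([np.sin(6 * freqs) + 0.05 * rng.standard_normal(200)
                    for _ in range(30)])
mean, basis = pca_basis(healthy, n_components=2)

baseline = damage_index(np.sin(6 * freqs), mean, basis)      # healthy shape
shifted = damage_index(np.sin(5.5 * freqs), mean, basis)     # "damaged" shape
print(baseline < shifted)
```

A stiffness change shifts the resonance, so the damaged FRF reconstructs poorly from the healthy basis; thresholding this index (or feeding it to a neural network, as in the paper) separates the cases.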
Abstract:
The design-build (DB) system has been demonstrated to be an effective delivery method and has gained popularity worldwide. However, a number of operational variations of the DB system have emerged over the last decade to cater for different clients' requirements. After deciding to procure a project through the DB system, the client still has to choose an appropriate configuration to deliver the project optimally. However, there is little research on the selection of DB operational variations, one of the main reasons being the lack of evaluation criteria for determining the appropriateness of each operational variation. To obtain such criteria, a three-round Delphi survey was conducted with 20 construction experts in the People's Republic of China (PRC). Seven top selection criteria were identified: (1) availability of competent design-builders; (2) client's capabilities; (3) project complexity; (4) client's control of the project; (5) early commencement and short duration; (6) reduced responsibility or involvement; and (7) clearly defined end user's requirements. These selection criteria were found to have a statistically significant level of agreement. The findings may furnish various stakeholders, DB clients in particular, with better insight to understand and compare the different operational variations of the DB system.
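Delphi studies typically test panel agreement with Kendall's coefficient of concordance W. The abstract reports statistically significant agreement without spelling out the computation, so the sketch below, with invented rankings, is illustrative only.

```python
# Kendall's coefficient of concordance W, the statistic commonly used to
# test agreement among Delphi panellists (illustrative; the rankings below
# are invented, not the survey's data).

def kendalls_w(ratings):
    """ratings: one list per expert, each a ranking of m criteria (1 = best).
    W = 12 S / (n^2 (m^3 - m)) for n experts and m criteria, no ties."""
    n = len(ratings)          # number of experts
    m = len(ratings[0])       # number of criteria
    totals = [sum(r[j] for r in ratings) for j in range(m)]
    mean_total = sum(totals) / m
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (n ** 2 * (m ** 3 - m))

# three hypothetical experts ranking five criteria
panel = [
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
]
print(round(kendalls_w(panel), 3))   # W near 1 indicates strong agreement
```

W ranges from 0 (no agreement) to 1 (identical rankings); significance is then judged against a chi-square approximation.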
Abstract:
Design-builders play a vital role in the success of DB projects. In the construction market of the People's Republic of China, however, most design-builders lack adequate competences to conduct DB projects successfully. The objective of this study is therefore to identify the key competences that design-builders should possess not only to ensure the success of DB projects but also to acquire competitive advantages in the DB market. Five semi-structured face-to-face interviews and a two-round Delphi questionnaire survey were conducted to identify the key competences of design-builders, and rankings were assigned to these competences on the basis of their relative importance. Six ranked key competences of design-builders were identified, namely: (1) experience with similar DB projects; (2) capability of corporate management; (3) combination of building techniques and design expertise; (4) financial capability for DB projects; (5) enterprise qualification and scale; and (6) credit records and reputation in the industry. Design-builders can use the research findings as guidelines to improve their DB competence, and the findings will also be useful to clients during the selection of design-builders.
Abstract:
The design-build (DB) system has been demonstrated to be an effective delivery method and has gained popularity worldwide. Although an increasing number of clients are adopting the DB method in China, most of them remain inexperienced with it. The objective of this study is therefore to identify the key competences that a client or its consultant should possess to ensure the success of DB projects. Face-to-face interviews and a two-round Delphi questionnaire survey were conducted, identifying the following six key competences of clients: (1) ability to clearly articulate project scope and objectives; (2) financial capacity for DB projects; (3) capability in contract management; (4) adequate staff or consulting team; (5) effective coordination with contractors; and (6) experience with similar DB projects. This study will hopefully provide clients with measures to evaluate their DB competence and further promote their understanding of the DB system in the PRC.
Abstract:
Successful identification and exploitation of opportunities has been an area of interest to many entrepreneurship researchers. Since Shane and Venkataraman’s seminal work (e.g. Shane and Venkataraman, 2000; Shane, 2000), several scholars have theorised on how firms identify, nurture and develop opportunities. The majority of this literature has been devoted to understanding how entrepreneurs search for new applications of their technological base or discover opportunities based on prior knowledge (Zahra, 2008; Sarasvathy et al., 2003). In particular, knowledge about potential customer needs and problems that may present opportunities is vital (Webb et al., 2010). Whereas the role of prior knowledge of customer problems (Shane, 2003; Shepherd and DeTienne, 2005) and positioning oneself in a so-called knowledge corridor (Fiet, 1996) has been researched, the role of opportunity characteristics and their interaction with customer-related mechanisms that facilitate and hinder opportunity identification has received scant attention.
Abstract:
In most materials, short stress waves are generated during plastic deformation, phase transformation, crack formation and crack growth. These phenomena are exploited in acoustic emission (AE) across a wide spectrum of applications, ranging from non-destructive testing for the detection of material defects to the monitoring of microseismic activity. The AE technique is also used for defect source identification and for failure detection. AE waves consist of P waves (primary/longitudinal waves), S waves (shear/transverse waves) and Rayleigh (surface) waves, as well as reflected and diffracted waves. The propagation of AE waves in various modes makes the determination of the source location difficult. In order to use the acoustic emission technique for accurate identification of the source location, an understanding of the wave propagation of AE signals at various locations in a plate structure is essential. Such an understanding can also assist in sensor placement for optimum detection of AE signals. In practice, AE signals radiating from a source propagate as stress waves, and unless the type of stress wave is known it is very difficult to locate the source using the classical propagation velocity equations. This paper describes the simulation of AE waves to identify the source location in a steel plate as well as the wave modes. Finite element analysis (FEA) is used for the numerical simulation of wave propagation in a thin plate. By knowing the type of wave generated, it is possible to apply the appropriate wave equations to determine the location of the source. For a single plate structure, the results show that the simulation algorithm is effective in simulating different stress waves.
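The classical constant-velocity location equations referred to above reduce, in the simplest one-dimensional two-sensor case, to solving a linear equation in the arrival-time difference. The sketch below is that textbook case with assumed values (the wave speed is a typical figure for a longitudinal wave in steel, not taken from the paper).

```python
# One-dimensional AE source location from the arrival-time difference at
# two sensors, using the classical constant-velocity equations
# (illustrative; the wave speed is an assumed figure for steel).

def locate_source(sensor_gap_m, dt_s, wave_speed_m_s):
    """Sensors at x = 0 and x = gap; dt = t(sensor at 0) - t(sensor at gap).
    Solves x/v - (gap - x)/v = dt for the source position x."""
    return (sensor_gap_m + wave_speed_m_s * dt_s) / 2

v = 5900.0                             # assumed P-wave speed in steel, m/s
gap = 1.0                              # sensors 1 m apart
x_true = 0.3
dt = x_true / v - (gap - x_true) / v   # synthetic arrival-time difference
print(locate_source(gap, dt, v))       # recovers the 0.3 m source position
```

The difficulty the paper addresses is exactly that v differs between P, S and Rayleigh modes, so the wave type must be identified (here via FEA simulation) before the right v can be inserted into this equation.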
Abstract:
Significant numbers of children are severely abused and neglected by parents and caregivers. Infants and very young children are the most vulnerable and are unable to seek help. To identify these situations and enable child protection and the provision of appropriate assistance, many jurisdictions have enacted 'mandatory reporting laws' requiring designated professionals such as doctors, nurses, police and teachers to report suspected cases of severe child abuse and neglect. Other jurisdictions have not adopted this legislative approach, at least partly motivated by a concern that the laws produce dramatic increases in unwarranted reports, which, it is argued, lead to investigations that infringe on people's privacy, cause trauma to innocent parents and families, and divert scarce government resources from deserving cases. The primary purpose of this paper is to explore the extent to which opposition to mandatory reporting laws is valid based on the claim that the laws produce 'overreporting'. The first part of this paper revisits the original mandatory reporting laws, discusses their development into their various current forms, explains their relationship with policy and common law reporting obligations, and situates them within modern child protection systems. This part of the paper shows that, in general, contemporary reporting laws have expanded far beyond their original conceptualisation, but that there is also now a deeper understanding of the nature, incidence, timing and effects of different types of severe maltreatment, an awareness that the real incidence of maltreatment is far higher than that officially recorded, and strong evidence that the majority of identified cases of severe maltreatment are the result of reports by mandated reporters.
The second part of this paper discusses the apparent effect of mandatory reporting laws on 'overreporting' by referring to Australian government data about reporting patterns and outcomes, with a particular focus on New South Wales. It will be seen that raw descriptive data about report numbers and outcomes appear to show that reporting laws produce both desirable consequences (identification of severe cases) and problematic consequences (increased numbers of unsubstantiated reports). Yet, to explore the extent to which the data support the overreporting claim, and because numbers of unsubstantiated reports alone cannot demonstrate overreporting, this part of the paper asks further questions of the data. Who makes reports, about which maltreatment types, and what are the outcomes of those reports? What is the nature of these reports; for example, to what extent are multiple reports made about the same child? What meaning can be attached to an 'unsubstantiated' report, and can such reports be used to show flaws in reporting effectiveness and problems in reporting laws? It will be suggested that the available evidence from Australia is not sufficiently detailed or strong to demonstrate the overreporting claim. However, it is also apparent that, whether adopting an approach based on public health and/or other principles, much better evidence about reporting needs to be collected and analysed. In addition, more nuanced research needs to be conducted to identify what can reasonably be said to constitute 'overreports', and efforts must be made to minimise unsatisfactory reporting practice, informed by the relevant jurisdiction's context and aims. It is also concluded that, depending on the jurisdiction, the available data may provide useful indicators of positive, negative and unanticipated effects of specific components of the laws, and of the strengths, weaknesses and needs of the child protection system.
Abstract:
In this paper, we report on phosphate-containing natural minerals found in the Jenolan Caves, Australia. Such minerals are formed by the reaction of bat guano with clays from the caves. Among these cave minerals is montgomeryite [Ca4MgAl4(PO4)6(OH)4·12H2O]. The presence of montgomeryite in deposits of the Jenolan Caves has been identified by X-ray diffraction (XRD). Raman spectroscopy, complemented by infrared spectroscopy, has been used to characterize the crystal structure of montgomeryite. The Raman spectrum of a standard montgomeryite mineral is identical to that of the Jenolan Caves sample. Bands are assigned to H2PO4-, OH and NH stretching vibrations. By using a combination of XRD and Raman spectroscopy, the existence of montgomeryite in the Jenolan Caves has been proven, and a mechanism for its formation is proposed.
Abstract:
Purpose: The construction industry is well known for its high accident rate, and many practitioners consider a preventative approach to be the most important means of bringing about improvements. This paper addresses previous research and the weaknesses of existing preventative approaches, and a new application is described and illustrated involving the use of a multi-dimensional simulation tool - Construction Virtual Prototyping (CVP). Methodology: A literature review was conducted to investigate previous studies of hazard identification and safety management and to develop the new approach. Owing to weaknesses in current practice, the research explored the use of computer simulation techniques to create virtual environments in which users can explore and identify construction hazards. Specifically, virtual prototyping technology was deployed to develop typical construction scenarios in which unsafe or hazardous incidents occur. In a case study, the users' performance was evaluated through their responses to incidents within the virtual environment, and the effectiveness of the computer simulation system was established through interviews with the safety project management team. Findings: The opinions and suggestions provided by the interviewees led to the initial conclusion that the simulation tool was useful in assisting the safety management team's hazard identification process during the early design stage. Originality: The research introduces an innovative method to support management teams' reviews of construction site safety. The system utilises three-dimensional modelling and four-dimensional simulation of worker behaviour, a configuration that has not previously been employed in construction simulations. An illustration of the method's use is also provided, together with a consideration of its strengths and weaknesses.
Abstract:
House dust is a heterogeneous matrix which contains a number of biological materials and particulate matter gathered from several sources. It is the accumulation of a number of semi-volatile and non-volatile contaminants, which are trapped and preserved, so house dust can be viewed as an archive of both indoor and outdoor air pollution. There is evidence that, on average, people tend to stay indoors most of the time, which increases their exposure to house dust. The aims of this investigation were to: (1) assess the levels of polycyclic aromatic hydrocarbons (PAHs), elements and pesticides in the indoor environment of the Brisbane area; (2) identify and characterise the possible sources of elemental constituents (inorganic elements), PAHs and pesticides by means of Positive Matrix Factorisation (PMF); and (3) establish the correlations between the levels of indoor air pollutants (PAHs, elements and pesticides) and the external and internal characteristics or attributes of the buildings and indoor activities by means of multivariate data analysis techniques. The dust samples were collected during the period 2005-2007 from homes located in different suburbs of Brisbane, Ipswich and Toowoomba, in South East Queensland, Australia. A vacuum cleaner fitted with a paper bag was used as the sampler for collecting the house dust, and a survey questionnaire completed by the residents captured information about the indoor and outdoor characteristics of their residences. House dust samples were analysed for three different classes of pollutant: pesticides, elements and PAHs. The analyses were carried out on samples of particle size less than 250 µm. The chemical analyses for both pesticides and PAHs were performed using gas chromatography-mass spectrometry (GC-MS), while elemental analysis was carried out using inductively coupled plasma-mass spectrometry (ICP-MS).
The data were subjected to multivariate data analysis techniques, namely the multi-criteria decision-making procedure Preference Ranking Organisation Method for Enrichment Evaluations (PROMETHEE), coupled with Geometrical Analysis for Interactive Aid (GAIA), in order to rank the samples and examine the data display. This study showed that, compared with the results of previous work carried out in Australia and overseas, the concentrations of pollutants in house dust in Brisbane and the surrounding areas were relatively high. The results also showed significant correlations between some of the physical parameters (type of building material, floor level, distance from industrial areas and major roads, and smoking) and the concentrations of pollutants. The type of building material and the age of the house were found to be two of the primary factors affecting the concentrations of pesticides and elements in house dust; the concentrations of these two types of pollutant appear to be higher in old (timber) houses than in brick ones. In contrast, the concentrations of PAHs were higher in brick houses than in timber ones. Other factors, such as floor level and distance from the main street and industrial areas, also affected the concentrations of pollutants in the house dust samples. To apportion the sources and to understand the mechanisms of the pollutants, the Positive Matrix Factorisation (PMF) receptor model was applied. The results showed significant correlations between the concentrations of contaminants in house dust and the physical characteristics of the houses, such as the age and type of the house, the distance from the main road and industrial areas, and smoking, and the sources of the pollutants were identified.
For PAHs, the sources were cooking activities, vehicle emissions, smoking, oil fumes, natural gas combustion and traces of diesel exhaust emissions; for pesticides the sources were application of pesticides for controlling termites in buildings and fences, treating indoor furniture and in gardens for controlling pests attacking horticultural and ornamental plants; for elements the sources were soil, cooking, smoking, paints, pesticides, combustion of motor fuels, residual fuel oil, motor vehicle emissions, wearing down of brake linings and industrial activities.
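The source-apportionment step above relies on Positive Matrix Factorisation, which decomposes a samples-by-species concentration matrix into non-negative source contributions and source profiles. True PMF minimises an uncertainty-weighted residual; as a simplified stand-in, the sketch below uses unweighted non-negative matrix factorisation with multiplicative updates on synthetic data (all values and dimensions are invented).

```python
import numpy as np

# Simplified illustration of the idea behind PMF source apportionment:
# factorise samples x species matrix X into non-negative source
# contributions G and source profiles F, X ≈ G @ F.  Real PMF weights each
# residual by its measurement uncertainty; this unweighted NMF is a sketch.

rng = np.random.default_rng(1)

def nmf(X, n_sources, iters=500):
    """Multiplicative-update non-negative factorisation X ≈ G @ F."""
    n, m = X.shape
    G = rng.random((n, n_sources))
    F = rng.random((n_sources, m))
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)   # update profiles
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)   # update contributions
    return G, F

# two synthetic "sources" mixed into 40 dust samples of 6 species
true_F = np.array([[5.0, 1.0, 0.1, 0.0, 2.0, 0.3],
                   [0.2, 0.1, 4.0, 3.0, 0.1, 1.0]])
true_G = rng.random((40, 2))
X = true_G @ true_F
G, F = nmf(X, n_sources=2)
print(np.abs(X - G @ F).max())   # small residual: the two sources recovered
```

In practice the recovered profile rows are then interpreted chemically (e.g. a row loading heavily on combustion-related PAHs is read as a vehicle-emission source), which is how the source lists above were derived.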
Abstract:
Raman spectroscopy, when used in spatially offset mode, has emerged as a promising tool for the identification of explosives and other hazardous substances concealed in opaque containers. The molecular fingerprinting capability of Raman spectroscopy makes it attractive for the unambiguous identification of hazardous substances in the field, and minimal sample preparation is required compared with other techniques. We report a field-portable, time-resolved Raman sensor for the detection of concealed chemical hazards in opaque containers. The sensor uses a pulsed nanosecond laser source in conjunction with an intensified CCD detector, and employs a combination of time- and space-resolved Raman spectroscopy to enhance its detection capability. It can identify concealed hazards from a single measurement without any chemometric data treatment.
Abstract:
We report an inverse spatially offset Raman spectroscopy (SORS) instrument capable of non-invasively identifying packaged substances from a distance. A conventional inverse SORS spectrometer has a non-contact distance equal to the focal distance of its collection system. In this work we demonstrate a defocused geometry, with a modified data analysis method, capable of making inverse SORS measurements from a distance greater than the focal distance of the collection lenses. With the defocused geometry we were able to detect acetaminophen concealed inside a 2 mm thick plastic bottle at a non-contact distance of 30 cm.
Abstract:
Experts' views and commentary are highly respected in every discipline. However, unlike in traditional disciplines such as medicine, mathematics and engineering, Information Systems (IS) expertise is difficult to define, despite the fact that seeking expert advice and views is common in the areas of IS project management, system implementation and evaluation. This research-in-progress paper attempts to understand the characteristics of the IS expert through a comprehensive literature review of analogous disciplines, and then derives a formative research model with three main constructs. A validated construct of IS expertise will have a wide range of implications for research and practice.