Abstract:
In their 2010 study drawing on 500 empirical philanthropy studies, Bekkers and Wiepking identified eight consistently significant giving mechanisms. The pilot study reported here extends what is known about one of these mechanisms, values, as a giving driver, in particular by considering how national cultural values bear on giving. Personal values are not formed in a vacuum: they are influenced by the wider culture and society, and thus have a socio-cultural dimension. Accordingly, this pilot research draws on media theory and cultural-studies work on national ethos to explore how national cultural values interact with giving. A directed qualitative content analysis was undertaken to compare US and Australian print media coverage of philanthropy. The two nations share an Anglo-Saxon orientation but differ significantly in national character and philanthropic activity. This study posits that a nation's media coverage of giving will reflect its national cultural ethos. Because such coverage can also shape personal values, the findings have implications for theory on the antecedents of personal giving values. Wider national values may drive or stifle giving, so this broader view of values as a driver also has implications for philanthropy promotion and fundraising.
Abstract:
This research is one of several ongoing studies conducted within the IT Professional Services (ITPS) research programme at Queensland University of Technology (QUT). In 2003, ITPS introduced the IS-Impact model, a measurement model for assessing information systems success from the viewpoint of multiple stakeholders. The model, along with its instrument, is robust, simple and generalisable, and yields results that are comparable across time, stakeholders, systems and system contexts. The IS-Impact model is defined as “a measure at a point in time, of the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups”. The model comprises four dimensions: ‘Individual Impact’, ‘Organizational Impact’, ‘Information Quality’ and ‘System Quality’. The two Impact dimensions measure the impact of the evaluated system to date, while the two Quality dimensions act as proxies for probable future impacts (Gable, Sedera & Chan, 2008). To fulfil the ITPS goal of developing “the most widely employed model”, this research re-validates and extends the IS-Impact model in a new context. This method/context-extension research tests the generalisability of the model by addressing its known limitations, one of which concerns the extent of its external validity. To gain wide acceptance, a model should be consistent and work well in different contexts; the IS-Impact model, however, had been validated only in the Australian context, with packaged software as the IS under study. This study is therefore concerned with whether the model can be applied in a different context. Aiming for a robust, standardised measurement model that can be used across different contexts, this research re-validates and extends the IS-Impact model and its instrument in public sector organisations in Malaysia.
The overarching (managerial) research question of this research is: “How can public sector organisations in Malaysia measure the impact of information systems systematically and effectively?” This managerial question is broken down into two specific research questions. The first addresses the applicability (relevance) of the dimensions and measures of the IS-Impact model in the Malaysian context, as well as the completeness of the model in the new context. Initially, this research assumes that the dimensions and measures of the IS-Impact model are sufficient for the new context. However, some IS researchers suggest that measures need to be selected purposefully for different contextual settings (DeLone & McLean, 1992; Rai, Lang & Welker, 2002). Thus, the first research question is: “Is the IS-Impact model complete for measuring the impact of IS in Malaysian public sector organisations?” [RQ1]. The IS-Impact model is a multidimensional model consisting of four dimensions or constructs, each represented by formative measures or indicators. Formative measures are known as composite variables because they make up, or form, the construct (in this case, a dimension of the IS-Impact model). These formative measures capture different aspects of the dimension; a measurement model of this kind therefore needs to be tested not just for the structural relationships between the constructs but also for the validity of each measure. In a previous study, the IS-Impact model was validated using formative validation techniques proposed in the literature (e.g., Diamantopoulos and Winklhofer, 2001; Diamantopoulos and Siguaw, 2006; Petter, Straub and Rai, 2007). However, there is potential to improve the validation testing of the model by adding further criterion (dependent) variables.
This includes identifying a consequence of the IS-Impact construct for the purpose of validation. Moreover, a different approach is employed in this research: the validity of the model is tested using the Partial Least Squares (PLS) method, a component-based structural equation modelling (SEM) technique. Thus, the second research question addresses the construct validation of the IS-Impact model: “Is the IS-Impact model valid as a multidimensional formative construct?” [RQ2]. This study employs two rounds of surveys, each with a different and specific aim. The first survey is qualitative and exploratory, aiming to investigate the applicability and sufficiency of the IS-Impact dimensions and measures in the new context. It was conducted in a state government in Malaysia. A total of 77 valid responses were received, yielding 278 impact statements. The results of the qualitative analysis demonstrate the applicability of most of the IS-Impact measures, and show that a significant new measure emerged from the context; this new measure was added to the System Quality dimension. The second survey is quantitative, aiming to operationalise the measures identified in the qualitative analysis and to rigorously validate the model. It was conducted in four state governments (including the state government involved in the first survey). A total of 254 valid responses were used in the data analysis. Data were analysed using structural equation modelling techniques, following the guidelines for formative construct validation, to test the validity and reliability of the constructs in the model. This is the first study to extend the complete IS-Impact model to a context that differs in nationality, language and type of information system (IS). The main contribution of this research is a comprehensive, up-to-date IS-Impact model validated in the new context.
The study has accomplished its purpose of testing the generalisability of the IS-Impact model and continuing IS evaluation research by extending the model to the Malaysian context. A further contribution is a validated Malaysian-language IS-Impact measurement instrument. It is hoped that this instrument will encourage related IS research in Malaysia, and that the demonstrated validity and generalisability of the model will encourage a cumulative tradition of research previously not possible. The study entailed several methodological improvements on prior work, including: (1) new criterion measures for the overall IS-Impact construct, employed in ‘identification through measurement relations’; (2) a stronger, multi-item ‘Satisfaction’ construct, employed in ‘identification through structural relations’; (3) an alternative version of the main survey instrument in which items are randomised (rather than blocked), compared against the main survey data to attend to possible common method variance (no significant differences between the two instruments were observed); (4) demonstration of a validation process for formative indexes of a multidimensional, second-order construct (existing examples mostly involve unidimensional constructs); (5) testing for suppressor effects that influence the significance of some measures and dimensions in the model; and (6) demonstration of the effect of an imbalanced number of measures within a construct on the contribution of each dimension in a multidimensional model.
Abstract:
From the user's point of view, handling information overload online is a major challenge, especially as the number of websites grows rapidly with e-commerce and related activities. Personalization based on user needs is key to solving the problem of information overload. Personalization methods help to identify relevant information that a user is likely to appreciate. User profiles and object profiles are the important elements of a personalization system. When creating user and object profiles, most existing methods adopt two-dimensional similarity methods based on vector or matrix models to find inter-user and inter-object similarity. For recommending similar objects to users, personalization systems use users-users, items-items and users-items similarity measures, in most cases computed with the Euclidean, Manhattan, cosine and other vector- or matrix-based measures. Web logs, however, are high-dimensional datasets, consisting of multiple users and multiple searches with many attributes for each. Two-dimensional data analysis methods may overlook latent relationships that exist between users and items. In contrast to other studies, this thesis utilises tensors, high-dimensional data models, to build user and object profiles and to find the inter-relationships between users-users and users-items. To create an improved personalized Web system, this thesis proposes to build three types of profiles (individual user, group user and object profiles) utilising the decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method) in conjunction with individual user profiles (forming the basis of a model-based approach) is proposed for making effective recommendations.
A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's strongest interests, is represented by the top dimension values extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their strongest interests, is built by clustering similar users based on the tensor-decomposed values; it is represented by the top association rules (containing various unique object combinations) derived from the searches made by the users in the cluster. An object profile is created to represent similar objects, clustered on the basis of their feature similarity. Depending on the category of a user (known, anonymous or frequent visitor to the website), any of the profiles or their combinations is used for making personalized recommendations. A ranking algorithm is also proposed that utilises the personalized information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real-life car website. Empirical analysis confirms the effectiveness of the recommendations made by the proposed approach over collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
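The profile-building step described above can be sketched with a plain HOSVD: the user-mode factor matrix of a (users x sessions x items) tensor gives each user a row of component loadings, from which individual and group profiles can be read off. This is a minimal sketch with synthetic data, not the thesis implementation; the tensor shape, the Poisson click counts and the dominant-component grouping rule are all assumptions for illustration.

```python
import numpy as np

# Hypothetical (users x sessions x items) usage tensor of click counts.
rng = np.random.default_rng(0)
T = rng.poisson(1.0, size=(6, 4, 5)).astype(float)

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd_factors(tensor):
    """Left singular vectors of each unfolding: the HOSVD factor matrices."""
    return [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
            for m in range(tensor.ndim)]

U_users, U_sessions, U_items = hosvd_factors(T)

# Individual profile: the components on which user 0 loads most strongly.
top_components = np.argsort(-np.abs(U_users[0]))[:2]

# Group profile: users whose dominant component matches user 0's.
dominant = np.argmax(np.abs(U_users), axis=1)
group = np.flatnonzero(dominant == dominant[0])
```

In a full system the group would then be mined for association rules, as the abstract describes; here it simply collects users with the same dominant factor.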
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the World Wide Web, the Internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties, and for relationships among these properties, is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of complex network properties can be expanded to a wider field including more complex weighted networks. The real networks shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool for computing the fractal dimension of complex networks; this is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network: the shortest distance between nodes is larger in the skeleton, and hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks for five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks, a finding that will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis explores the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since they consist of a geometrical figure that repeats on an ever-reduced scale, and fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns; however, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free, small-world and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
This multifractal analysis thus provides a potentially useful tool for gene clustering and identification. The third part of the thesis investigates the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields, and recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of the network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by the Hurst exponent. We verify the validity of the method by showing that time series with stronger correlation, and hence larger Hurst exponent, tend to have smaller fractal dimension, and hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently, and confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that a large percentage of nodes must be destroyed before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
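The HVG construction used throughout this part is simple to state: two time points are linked when every sample strictly between them lies below both. A minimal sketch follows; the Brownian-motion series generated here is an illustrative stand-in, not the thesis data.

```python
import numpy as np

def horizontal_visibility_graph(x):
    """Edges (i, j) of the HVG: every sample strictly between i and j
    is lower than both x[i] and x[j]; adjacent points are always linked."""
    edges = set()
    n = len(x)
    for i in range(n - 1):
        edges.add((i, i + 1))
        top = x[i + 1]                 # running max of intermediate samples
        for j in range(i + 2, n):
            if x[i] > top and x[j] > top:
                edges.add((i, j))
            top = max(top, x[j])
            if top >= x[i]:            # nothing further right can see node i
                break
    return edges

# Ordinary Brownian motion (Hurst exponent 0.5) as the input series.
rng = np.random.default_rng(1)
series = np.cumsum(rng.standard_normal(200))
E = horizontal_visibility_graph(series)

degree = np.zeros(len(series), dtype=int)
for i, j in E:
    degree[i] += 1
    degree[j] += 1
```

For fractional Brownian motion the tails of this degree distribution are expected to be exponential, consistent with the fragility of HVG networks under attack described above.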
Abstract:
With the growing number of XML documents on the Web, it becomes essential to organise these documents effectively in order to retrieve useful information from them. A possible solution is to apply clustering to the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these semi-structured documents due to their heterogeneity and structural irregularity. Because of scalability and complexity problems, most existing research on clustering techniques focuses on only one feature of the XML documents, either their structure or their content. Knowledge gained in the form of clusters based on structure or content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and the content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, including both kinds of information in the clustering process imposes a huge overhead on the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods that utilise frequent pattern mining techniques to reduce the dimensionality; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and content information.
The explicit model uses a higher-order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and content information. This thesis also proposes a novel incremental technique to decompose large tensor models and utilise the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics, to understand the usefulness of the framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval, using the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures to constrain the content shows an improvement in accuracy over content-only and structure-only clustering results. Scalability experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis contributes to effectively combining the structure and content of XML documents for clustering, in order to improve the accuracy of the clustering solution. It also addresses research gaps in frequent pattern mining by generating efficient and concise frequent subtrees, with various node relationships, that can be used in clustering.
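The implicit (VSM) combination can be sketched as concatenating a frequent-subtree indicator vector (structure) with a term vector for content constrained to those subtrees, and comparing documents by cosine similarity. The feature values below are invented purely for illustration, and the simple concatenation is one plausible reading of the model, not the thesis's exact weighting scheme.

```python
import numpy as np

# Hypothetical features for three XML documents.
subtree_features = np.array([
    [1, 1, 0],   # doc 0 contains frequent subtrees s0, s1
    [1, 1, 0],   # doc 1 contains the same subtrees
    [0, 0, 1],   # doc 2 contains a different subtree
], float)
term_features = np.array([
    [2, 0, 1],   # term frequencies within the matched subtrees
    [1, 0, 2],
    [0, 3, 0],
], float)

# Implicit model: one combined vector per document.
docs = np.hstack([subtree_features, term_features])

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_01 = cosine(docs[0], docs[1])   # structurally and textually alike
sim_02 = cosine(docs[0], docs[2])   # no shared subtrees or terms
```

A clustering algorithm would then group documents 0 and 1 together, since they agree in both structure and content, while document 2 falls elsewhere.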
Abstract:
In a little over twenty years, Australia has succeeded in developing an international education industry worth A$15.5 billion. Most universities engaged in this industry see themselves as integrating an international, intercultural or global dimension into teaching, research and service. We examine the internationalization challenges faced by Australia's universities, and explore how curriculum and mobility are understood by academics interviewed at two case-study universities.
Abstract:
A cDNA corresponding to a transcript induced in culture by nitrogen (N) starvation was identified in Colletotrichum gloeosporioides by a differential hybridisation strategy. The cDNA comprised 905 bp and predicted a 215 aa protein; the gene encoding the cDNA was termed CgDN24. No function for CgDN24 could be predicted by database homology searches using the cDNA sequence, and no homologues were found in the sequenced fungal genomes. Transcripts of CgDN24 were detected in infected leaves of Stylosanthes guianensis at stages of infection corresponding with symptom development. The CgDN24 gene was disrupted by homologous recombination, which led to reduced radial growth rates and the production of hyphae with a hyperbranching phenotype. Normal sporulation was observed, and following conidial inoculation of S. guianensis, normal disease development was obtained. These results demonstrate that CgDN24 is necessary for normal hyphal development in axenic culture but dispensable for phytopathogenicity. © 2005 Elsevier GmbH. All rights reserved.
Abstract:
The in vitro and in vivo degradation properties of poly(lactic-co-glycolic acid) (PLGA) scaffolds produced by two different technologies, thermally induced phase separation (TIPS) and solvent casting and particulate leaching (SCPL), were compared. Over 6 weeks, in vitro degradation produced changes in SCPL scaffold dimensions, mass, internal architecture and mechanical properties. TIPS scaffolds showed far fewer changes in these parameters, providing significant advantages over SCPL. In vivo results were based on a microsurgically created arteriovenous (AV) loop sandwiched between two TIPS scaffolds placed in a polycarbonate chamber under rat groin skin. Histologically, a predominant foreign-body giant cell response and reduced vascularity were evident in tissue ingrowth between 2 and 8 weeks in TIPS scaffolds, and tissue death occurred at 8 weeks in the smallest pores. Morphometric comparison of TIPS and SCPL scaffolds indicated slightly better tissue ingrowth but greater loss of scaffold structure in SCPL scaffolds. Although advantageous in vitro, the large surface area:volume ratios and varying pore sizes of PLGA TIPS scaffolds mean that effective in vivo (AV loop) utilisation will only be achieved if the foreign-body response can be significantly reduced so as to allow successful vascularisation, and hence sustained tissue growth, in pores smaller than 300 μm. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
Relics is a single-channel video derived from a 3D computer animation that combines a range of media including photography, drawing, painting and pre-shot video. It is constructed around a series of pictorial stills that become interlinked by the more traditionally filmic processes of panning, zooming and crane shots. In keeping with these ideas, the work revolves around a series of static architectural forms within the strangely menacing enclosure of a geodesic dome. These clinical aspects of the work are complemented by a series of elements that evoke fluidity: fireworks, mirrored biomorphic forms and oscillating projections. The visual dimension of the work is complemented by a soundtrack of rainforest bird calls. Through its ambiguous combination of recorded and virtual imagery, Relics explores the indeterminate boundaries between real and virtual space. On the one hand, it represents actual events and spaces drawn from the artist's studio and image archive; on the other, it represents the highly idealised spaces of drawing and 3D animation. In this work the disembodied, wandering virtual eye is met with an uncanny combination of scenes in which scale and the relationships between objects are disrupted and changed. Through this simultaneity between the real and the virtual, the work conveys a disembodied sense of space and time that carries a powerful sense of affect. Relics was among the first international examples of 3D animation technology in contemporary art.
It was originally exhibited in the artist's solo show ‘Places That Don't Exist’ (2007, George Petelin Gallery, Gold Coast) and went on to be included in the group shows ‘d/Art 07/Screen: The Post Cinema Experience’ (2007, Chauvel Cinema, Sydney), ‘Experimenta Utopia Now: International Biennial of Media Art’ (2010, Arts Centre, Melbourne and national touring venues) and ‘Move on Asia’ (2009, Alternative Space Loop, Seoul and Para-site Art Space, Hong Kong), and was broadcast on Souvenirs from Earth (video art cable channel, Germany and France). The work was analysed in catalogue texts for ‘Places That Don't Exist’ (2007), ‘d/Art 07’ (2007) and ‘Experimenta Utopia Now’ (2010), and on the ‘Souvenirs from Earth’ website.
Abstract:
Current research in secure messaging for Vehicular Ad hoc Networks (VANETs) appears to focus on employing a digital certificate-based Public Key Cryptosystem (PKC) to support security. The security overhead of such a scheme, however, creates a transmission delay and introduces a time-consuming verification process to VANET communications. This paper proposes a non-certificate-based public key management for VANETs. A comprehensive evaluation of performance and scalability of the proposed public key management regime is presented, which is compared to a certificate-based PKC by employing a number of quantified analyses and simulations. Not only does this paper demonstrate that the proposal can maintain security, but it also asserts that it can improve overall performance and scalability at a lower cost, compared to the certificate-based PKC. It is believed that the proposed scheme will add a new dimension to the key management and verification services for VANETs.
Abstract:
Within Australia, motor vehicle injury is the leading cause of hospital admissions and fatalities. Road crash data reveal that, among the factors contributing to crashes in Queensland, speed and alcohol continue to be overrepresented. While alcohol is the number one contributing factor to fatal crashes, speeding also contributes to a high proportion of crashes. Research indicates that risky driving is an important contributor to road crashes. However, it has been debated whether all risky driving behaviours are similar enough to be explained by the same combination of factors. Further, road safety authorities have traditionally relied upon deterrence-based countermeasures to reduce the incidence of illegal driving behaviours such as speeding and drink driving, although more recent research has focussed on social factors to explain these behaviours. The purpose of this research was to examine and compare the psychological, legal and social factors contributing to two illegal driving behaviours: exceeding the posted speed limit and driving when over the legal blood alcohol concentration (BAC) for the driver's licence type. Complementary theoretical perspectives were chosen to comprehensively examine these two behaviours, including Akers’ social learning theory, Stafford and Warr’s expanded deterrence theory, and personality perspectives encompassing alcohol misuse, sensation seeking and Type-A behaviour pattern. The program of research consisted of two phases: a preliminary pilot study and the main quantitative phase. The pilot study was undertaken to inform the development of the quantitative study and to ensure the clarity of the theoretical constructs operationalised in this research. Semi-structured interviews were conducted with 11 Queensland drivers recruited from Queensland Transport Licensing Centres and Queensland University of Technology (QUT).
These interviews demonstrated that the majority of participants had engaged in at least one of the behaviours, or knew of someone who had. It was also found that the social environment in which both behaviours operate, including family and friends and the social rewards and punishments associated with the behaviours, is important in these drivers' decision making. The main quantitative phase of the research involved a cross-sectional survey of 547 Queensland licensed drivers. The aim of this study was to determine the relationship between speeding and drink driving and whether there were any similarities or differences in the factors that contribute to a driver's decision to engage in one or the other. A comparison of the participants' self-reported speeding and drink driving behaviour demonstrated a weak positive association between the two behaviours. Further, participants reported engaging in speeding, at both low (i.e., up to 10 kilometres per hour) and high (i.e., 10 kilometres per hour or more) levels, more frequently than in drink driving. It was noted that those who indicated they drove when they may have been over the legal limit for their licence type exceeded the posted speed limit by 10 kilometres per hour or more, more frequently than those who complied with the regulatory limits for drink driving. A series of regression analyses were conducted to investigate the factors that predict self-reported speeding, self-reported drink driving, and the preparedness to engage in both behaviours. In relation to self-reported speeding (n = 465), it was found that among the sociodemographic and person-related factors, younger drivers and those who scored high on measures of sensation seeking were more likely to report exceeding the posted speed limit.
In addition, among the legal and psychosocial factors, it was observed that direct exposure to punishment (i.e., being detected by police), direct punishment avoidance (i.e., engaging in an illegal driving behaviour and not being detected by police), personal definitions (i.e., personal orientation or attitudes toward the behaviour), both the normative and behavioural dimensions of differential association (i.e., the orientation or attitude of friends and family, as well as the behaviour of these individuals), and anticipated punishments were significant predictors of self-reported speeding. It was interesting to note that associating with significant others who held unfavourable definitions towards speeding (the normative dimension of differential association) and anticipating punishments from others were both significant predictors of a reduction in self-reported speeding. In relation to self-reported drink driving (n = 462), a logistic regression analysis indicated a number of significant predictors that increased the likelihood of participants having driven in the last six months when they thought they may have been over the legal alcohol limit. These included: experiences of direct punishment avoidance; having a family member convicted of drink driving; higher levels of Type-A behaviour pattern; greater alcohol misuse (as measured by the AUDIT); and the normative dimension of differential association (i.e., associating with others who held favourable attitudes to drink driving). A final logistic regression analysis examined the predictors of reporting engaging in both drink driving and speeding versus speeding only (the more common of the two behaviours) (n = 465).
It was found that experiences of punishment avoidance for speeding decreased the likelihood of engaging in both speeding and drink driving, whereas in the case of drink driving, direct punishment avoidance increased the likelihood of engaging in both behaviours. It was also noted that holding favourable personal definitions toward speeding and drink driving, as well as higher levels of Type-A behaviour pattern and greater alcohol misuse, significantly increased the likelihood of engaging in both speeding and drink driving. This research has demonstrated that compliance with the regulatory limits was much higher for drink driving than for speeding. It is acknowledged that while speed limits are a fundamental component of speed management practices in Australia, the countermeasures applied to speeding and drink driving do not appear to elicit the same level of compliance across the driving population. Further, the findings suggest that while the principles underpinning the current regime of deterrence-based countermeasures are sound, current enforcement practices are insufficient to ensure compliance among the driving population, particularly in the case of speeding. Future research should further examine the degree of overlap between speeding and drink driving behaviour and whether punishment avoidance experiences for a specific illegal driving behaviour serve to undermine the deterrent effect of countermeasures aimed at reducing the incidence of another illegal driving behaviour. Furthermore, future work should seek to understand the factors which predict engaging in speeding and drink driving at the same time. Speeding has shown itself to be a pervasive and persistent behaviour; hence it would be useful to examine why road safety authorities have been successful in convincing the majority of drivers of the dangers of drink driving, but not of those associated with speeding. 
In conclusion, the challenge for road safety practitioners will be to convince drivers that speeding and drink driving are equally risky behaviours, with the ultimate goal to reduce the prevalence of both behaviours.
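The logistic regression analyses described above can be sketched in code. This is an illustrative reconstruction only: the predictor names, coefficients, and simulated data are hypothetical stand-ins for the survey measures (direct punishment avoidance, AUDIT score, and so on), not the study's actual dataset or coding scheme.

```python
# Illustrative sketch of a logistic regression predicting a binary
# self-reported drink-driving outcome, in the spirit of the analysis above.
# All variable names and data are simulated, not the study's own.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 462  # matches the reported drink-driving analysis sample size

# Simulated predictors (standardised scores / binary indicators)
X = np.column_stack([
    rng.normal(size=n),            # direct punishment avoidance
    rng.integers(0, 2, size=n),    # family member convicted of drink driving
    rng.normal(size=n),            # Type-A behaviour pattern
    rng.normal(size=n),            # AUDIT alcohol-misuse score
    rng.normal(size=n),            # normative differential association
])
# Simulated binary outcome: drove while possibly over the limit (last 6 months)
logits = X @ np.array([0.8, 0.6, 0.4, 0.9, 0.7]) - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # OR > 1 => predictor raises the odds
print(odds_ratios)
```

Exponentiating the fitted coefficients gives odds ratios, the usual way such predictors are reported: values above 1 correspond to factors that increase the likelihood of the behaviour.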
Resumo:
Purpose: Investigations of foveal aberrations assume circular pupils. However, the pupil becomes increasingly elliptical with increasing visual field eccentricity. We address this and other issues concerning peripheral aberration specification. Methods: One approach uses an elliptical pupil similar to the actual pupil shape, stretched along its minor axis to become a circle so that Zernike circular aberration polynomials may be used. Another approach uses a circular pupil whose diameter matches either the larger or smaller dimension of the elliptical pupil. Pictorial presentation of aberrations, the influence of wavelength on aberrations, sign differences between aberrations for fellow eyes, and referencing position to either the visual field or the retina are considered. Results: Examples show differences between the two approaches. Each has its advantages and disadvantages, but there are ways to compensate for most disadvantages. Two representations of data are pupil aberration maps at each position in the visual field and maps showing the variation in individual aberration coefficients across the field. Conclusions: Based on simplicity of use, adequacy of approximation, possible departures of off-axis pupils from ellipticity, and ease of understanding by clinicians, the circular pupil approach is preferable to the stretched elliptical approach for studies involving field angles up to 30 deg.
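The stretched-pupil approach described in the Methods can be sketched as a simple coordinate transform. This is a minimal illustration, not the authors' code, and it assumes the common approximation that the minor axis of the off-axis pupil shrinks roughly as the cosine of the field angle:

```python
# Minimal sketch: mapping elliptical off-axis pupil coordinates onto a unit
# circle so standard Zernike circular polynomials can be used. Assumes the
# minor/major axis ratio is approximately cos(field angle).
import numpy as np

def stretch_to_circle(x, y, field_angle_deg):
    """Stretch elliptical pupil coords along the minor (x) axis to a circle."""
    ratio = np.cos(np.radians(field_angle_deg))  # approx minor/major axis ratio
    return x / ratio, y  # stretched coordinates on the unit circle

# Example: a point on the rim of the elliptical pupil at 30 deg eccentricity
x_c, y_c = stretch_to_circle(np.cos(np.radians(30)), 0.0, 30.0)
print(x_c, y_c)  # the rim point maps to (1.0, 0.0) on the unit circle
```

The alternative circular-pupil approach favoured in the Conclusions skips this transform and simply fits Zernike polynomials over a circle matched to one diameter of the ellipse.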
Resumo:
A key issue in the economic development and performance of organizations is the existence of standards. Their definition and control are sources of power and it is important to understand their concept, as it gives standards their direction and their legitimacy, and to explore how they are represented and applied. The difficulties posed by classical micro-economics in establishing a theory of standardization that is compatible with its fundamental axiomatic are acknowledged. We propose to reconsider the problem by taking the opposite perspective in questioning its theoretical base and by reformulating assumptions about the independent and autonomous decisions taken by actors. The Theory of Conventions will offer us a theoretical framework and tools enabling us to understand the systemic dimension and dynamic structure of standards. These will be seen as a special case of conventions. This work aims to provide a sound basis and promote a better consciousness in the development of global project management standards. It aims also to emphasize that social construction is not a matter of copyright but a matter of open minds, collective cognitive process and freedom for the common wealth.
Resumo:
A key issue for the economic development and performance of organizations is the existence of standards. As their definition and control are sources of power, it is important to understand the concept of a standard and to consider the representations it authorizes, which give standards their direction and their legitimacy. The difficulties classical microeconomics faces in establishing a theory of standardisation compatible with its fundamental axiomatic are underlined. We propose to reconsider the problem by taking the opposite route: questioning the theoretical base and reformulating assumptions about the autonomy of actors' choices. The theory of conventions will offer us both a theoretical framework and tools, enabling us to understand the systemic dimension and dynamic structure of standards seen as a special case of conventions. This work thus aims to provide a sound basis and promote a better consciousness in the development of global project management standards, and also to underline that social construction is not a matter of copyright but a matter of open minds, collective cognitive process and freedom for the common wealth.
Resumo:
Focusing on the role of the project management discipline, within and between organizations, in designing and implementing strategy as a source of competitive advantage leads us to question the scientific field behind this discipline. This science should be the basis for the development and use of bodies of knowledge, standards, certification programs, education, and competencies, and beyond this a source of value for people, organizations, and society. It is therefore paramount to characterize, define, and understand this field and its underlying strengths, basis, and development. For this purpose we propose to give some insights into the current situation. This will lead us to clarify our epistemological position and demonstrate that both constructivist and positivist approaches are required to seize the full dimension and dynamics of the field. We will refer to the sociology of actor-networks and qualitative scientometrics, leading to the choice of the co-word analysis method to capture the project management field and its dynamics. Results of a study based on the analysis of the ABI Inform database will be presented and some future trends and scenarios proposed.
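The core computation behind co-word analysis can be sketched briefly. This is a hedged illustration of the basic idea only: counting keyword co-occurrences across documents and scoring pairs with the standard equivalence (association) index E(i,j) = c_ij^2 / (c_i * c_j). The keywords below are invented examples, not results from the ABI Inform study.

```python
# Toy co-word analysis: keyword co-occurrence counts and equivalence index.
# Documents and keywords are hypothetical, for illustration only.
from itertools import combinations
from collections import Counter

docs = [
    {"project management", "standards", "strategy"},
    {"project management", "competence", "standards"},
    {"strategy", "competitive advantage", "project management"},
]

freq = Counter(k for d in docs for k in d)  # c_i: documents containing word i
co = Counter(frozenset(p)                   # c_ij: documents containing both
             for d in docs for p in combinations(sorted(d), 2))

def equivalence(a, b):
    """Equivalence index E(a,b) = c_ab^2 / (c_a * c_b), in [0, 1]."""
    c_ab = co[frozenset((a, b))]
    return c_ab ** 2 / (freq[a] * freq[b])

print(equivalence("project management", "standards"))  # 2**2 / (3*2) ≈ 0.667
```

Clustering the resulting pairwise scores is what lets co-word analysis map the themes of a field and track their dynamics over time.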