921 results for COMPLEXITY


Relevance: 10.00%

Abstract:

The historical challenge of environmental impact assessment (EIA) has been to predict project-based impacts accurately. Both EIA legislation and the practice of EIA have evolved over the last three decades in Canada, and the development of the discipline and science of environmental assessment has improved how we apply environmental assessment to complex projects. The practice of environmental assessment integrates the social and natural sciences and relies on an eclectic knowledge base drawn from a wide range of sources. EIA methods and tools provide a means to structure and integrate knowledge in order to evaluate and predict environmental impacts.

This chapter provides a brief overview of how impacts are identified and predicted. How do we determine which aspects of the natural and social environment will be affected when a mine is excavated? How does the practitioner determine the range of potential impacts, assess whether they are significant, and predict the consequences? There are no standard answers to these questions, but there are established methods that provide a foundation for scoping and predicting the potential impacts of a project.

Of course, the community and publics play an important role in this process, and this will be discussed in subsequent chapters. In the first part of this chapter we deal with impact identification, which involves applying scoping to critical issues and determining impact significance, baseline ecosystem evaluation techniques, and how to communicate environmental impacts. In the second part of the chapter we discuss the prediction of impacts in relation to the complexity of the environment, ecological risk assessment, and modelling.

Relevance: 10.00%

Abstract:

This paper presents an efficient, low-complexity clipping noise compensation scheme for PAR-reduced orthogonal frequency division multiple access (OFDMA) systems. Conventional clipping noise compensation schemes proposed for OFDM systems are decision-directed schemes that use demodulated data symbols. These schemes therefore fail to deliver the expected performance in OFDMA systems, where multiple users share a single OFDM symbol and a given user may only know his or her own modulation scheme. The proposed clipping noise estimation and compensation scheme does not require knowledge of the demodulated symbols of the other users, making it very promising for OFDMA systems. It uses the equalized output and the reserved tones to reconstruct the signal by compensating for the clipping noise. Simulation results show that the proposed scheme can significantly improve system performance.
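As a rough, self-contained illustration of the clipping problem that such schemes address (not the proposed compensation algorithm itself, which the abstract does not specify in detail), the following Python sketch generates a random QPSK-modulated OFDM symbol, clips its time-domain envelope, and measures the resulting peak-to-average ratio and clipping-noise power. The subcarrier count and clipping ratio are arbitrary choices for the example.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N = 256                                                     # subcarriers (arbitrary)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # QPSK data
x = np.fft.ifft(symbols) * np.sqrt(N)                       # time-domain OFDM symbol

# Amplitude clipping at a chosen clipping ratio (relative to the RMS level)
clip_ratio = 1.4
threshold = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
clipped = np.where(np.abs(x) > threshold,
                   threshold * x / np.abs(x),               # keep phase, limit magnitude
                   x)

clipping_noise = clipped - x                                # distortion a receiver must estimate
print(f"PAR before clipping: {papr_db(x):.2f} dB")
print(f"PAR after clipping:  {papr_db(clipped):.2f} dB")
print(f"Clipping-noise power: {np.mean(np.abs(clipping_noise) ** 2):.4f}")
```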

Relevance: 10.00%

Abstract:

John Frazer's architectural work is inspired by living and generative processes. Both evolutionary and revolutionary, it explores information ecologies and the dynamics of the spaces between objects. Fuelled by an interest in the cybernetic work of Gordon Pask and Norbert Wiener, and the possibilities of the computer and the "new science" it has facilitated, Frazer and his team of collaborators have conducted a series of experiments that utilize genetic algorithms, cellular automata, emergent behaviour, complexity and feedback loops to create a truly dynamic architecture.

Frazer studied at the Architectural Association (AA) in London from 1963 to 1969, and later became unit master of Diploma Unit 11 there. He was subsequently Director of Computer-Aided Design at the University of Ulster, a post he held while writing An Evolutionary Architecture in 1995, and a lecturer at the University of Cambridge. In 1983 he co-founded Autographics Software Ltd, which pioneered microprocessor graphics. Frazer was awarded a personal chair at the University of Ulster in 1984.

In Frazer's hands, architecture becomes machine-readable, formally open-ended and responsive. His work as computer consultant to Cedric Price's Generator Project of 1976 (see p84) led to the development of a series of tools and processes; these have resulted in projects such as the Calbuild Kit (1985) and the Universal Constructor (1990). These subsequent computer-orientated architectural machines are makers of architectural form beyond the full control of the architect-programmer. Frazer makes much reference to the multi-celled relationships found in nature, and their ongoing morphosis in response to continually changing contextual criteria. He defines the elements that describe his evolutionary architectural model thus: "A genetic code script, rules for the development of the code, mapping of the code to a virtual model, the nature of the environment for the development of the model and, most importantly, the criteria for selection." In setting out these parameters for designing evolutionary architectures, Frazer goes beyond the usual notions of architectural beauty and aesthetics. Nevertheless his work is not without an aesthetic: some pieces are a frenzy of mad wire, while others have a modularity that is reminiscent of biological form.

Algorithms form the basis of Frazer's designs. These algorithms determine a variety of formal results depending on the nature of the information they are given. His work, therefore, is always dynamic, always evolving and always different. Designing with algorithms is also critical to other architects featured in this book, such as Marcos Novak (see p150). Frazer has made an unparalleled contribution to defining architectural possibilities for the twenty-first century, and remains an inspiration to architects seeking to create responsive environments. Architects were initially slow to pick up on the opportunities that the computer provides. These opportunities are both representational and spatial: computers can help architects draw buildings and, more importantly, they can help architects create varied spaces, both virtual and actual. Frazer's work was groundbreaking in this respect, and well before its time.
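Frazer's five elements map naturally onto the skeleton of a genetic algorithm. The Python sketch below is a generic illustration of that loop (a code script, development rules, mapping to a model, an environment, and selection criteria), not a reconstruction of Frazer's own software; the bit-string encoding, growth rule and fitness target are arbitrary placeholders.

```python
import random

# 1. Genetic code script: a fixed-length bit string.
def random_genome(length=16):
    return [random.randint(0, 1) for _ in range(length)]

# 2./3. Rules for developing the code and mapping it to a (virtual) model:
#       here the bits are simply read as heights of a row of building blocks.
def develop(genome):
    return [1 + bit for bit in genome]          # trivial "growth" rule

# 4. Environment and 5. selection criteria: a placeholder fitness that rewards
#    a target silhouette (a stand-in for daylight, structure, programme, etc.).
TARGET = [2, 1] * 8
def fitness(model):
    return -sum(abs(a - b) for a, b in zip(model, TARGET))

def evolve(pop_size=30, generations=40, mutation_rate=0.05):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda g: fitness(develop(g)), reverse=True)
        parents = ranked[: pop_size // 2]                 # selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                     # crossover
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]                    # mutation
            children.append(child)
        population = children
    return max(population, key=lambda g: fitness(develop(g)))

best = evolve()
print("best model:", develop(best), "fitness:", fitness(develop(best)))
```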

Relevance: 10.00%

Abstract:

Developing an effective impact evaluation framework, managing and conducting rigorous impact evaluations, and developing a strong research and evaluation culture within development communication organisations present many challenges. This is especially so when both the community and organisational contexts are continually changing and the outcomes of programs are complex and difficult to identify clearly.

This paper presents a case study from a research project being conducted from 2007-2010 that aims to address these challenges and issues, entitled Assessing Communication for Social Change: A New Agenda in Impact Assessment. Building on previous development communication projects which used ethnographic action research, this project is developing, trialling and rigorously evaluating a participatory impact assessment methodology for assessing the social change impacts of community radio programs in Nepal. The project is a collaboration between Equal Access – Nepal (EAN), Equal Access – International, local stakeholders and listeners, a network of trained community researchers, and a research team from two Australian universities. A key element of the project is the establishment of an organisational culture within EAN that values and supports the impact assessment process being developed, which is based on continuous action learning and improvement.

The paper describes the situation related to monitoring and evaluation (M&E) and impact assessment before the project began, in which EAN was often reliant on time-bound studies and ‘success stories’ derived from listener letters and feedback. We then outline the various strategies used in an effort to develop stronger and more effective impact assessment and M&E systems, and the gradual changes that have occurred to date. These changes include a greater understanding of the value of adopting a participatory, holistic, evidence-based approach to impact assessment. We also critically review the many challenges experienced in this process, including:

• Tension between the pressure from donors to ‘prove’ impacts and the adoption of a bottom-up, participatory approach based on ‘improving’ programs in ways that meet community needs and aspirations.
• Resistance from the content teams to changing their existing M&E practices and to the perceived complexity of the approach.
• Lack of meaningful connection between the M&E and content teams.
• Human resource problems and a lack of capacity in analysing qualitative data and reporting results.
• Contextual challenges, including extreme poverty, wide cultural and linguistic diversity, poor transport and communications infrastructure, and political instability.
• A general lack of acceptance of the importance of evaluation within Nepal, due to a tendency to accept everything as fate or ‘natural’ rather than requiring investigation into a problem.

Relevance: 10.00%

Abstract:

The importance of broadening community participation in environmental decision-making is widely recognized, yet a lack of participation in this process appears to be a perennial problem. In this context, there have been calls from some academics for more extensive use of geographic information systems (GIS) and distance learning technologies, accessible via the Internet, as a possible means to inform and empower communities. However, a number of problems exist. For instance, the scope for online interaction between policy-makers and citizens is currently limited. Contemporary web-based environmental information systems suffer, on the one hand, from this lack of interactivity and, on the other, from their apparent complexity for the lay user. This paper explores the issue of online community participation at the local level and attempts to construct a framework for a new (and potentially more effective) model of online participatory decision-making. The key components, system architecture and stages of such a model are introduced. This model, referred to as a ‘Community Based Interactive Environmental Decision Support System’, incorporates advanced information technologies, distance learning and community involvement tools, which will be applied and evaluated in the field through a pilot project in Tokyo in the summer of 2002.

Relevance: 10.00%

Abstract:

Health Information Systems (HIS) make extensive use of Information and Communication Technologies (ICT). The use of ICT aids in improving the quality and efficiency of healthcare services by making healthcare information available at the point of care (Goldstein, Groen, Ponkshe, and Wine, 2007). The increasing availability of healthcare data presents security and privacy issues which have not yet been fully addressed (Liu, Caelli, May, and Croll, 2008a). Healthcare organisations have to comply with the security and privacy requirements stated in laws, regulations and ethical standards while managing healthcare information. Protecting the security and privacy of healthcare information is a very complex task (Liu, May, Caelli and Croll, 2008b). In order to reduce the complexity of providing security and privacy in HIS, appropriate information security services and mechanisms have to be implemented. Solutions at the application layer have already been implemented in HIS, such as those existing in healthcare web services (Weaver et al., 2003). In addition, Discretionary Access Control (DAC) is the most commonly implemented access control model for restricting access to resources at the OS layer (Liu, Caelli, May, Croll and Henricksen, 2007a). Nevertheless, the combination of application security mechanisms and DAC at the OS layer has been stated to be insufficient to satisfy security requirements in computer systems (Loscocco et al., 1998).

This thesis investigates the feasibility of implementing Security Enhanced Linux (SELinux) to enforce a Role-Based Access Control (RBAC) policy to help protect resources at the Operating System (OS) layer. SELinux provides Mandatory Access Control (MAC) mechanisms at the OS layer. These mechanisms can contain the damage from compromised applications and restrict access to resources according to the security policy implemented. The main contribution of this research is to provide a modern framework for implementing and managing SELinux in HIS. The proposed framework introduces SELinux Profiles to restrict access permissions over system resources to authorised users. The feasibility of using SELinux profiles in HIS was demonstrated through the creation of a prototype, which was subjected to various attack scenarios. The prototype was also subjected to testing during emergency scenarios, where changes to the security policies had to be made on the spot. Attack scenarios were based on vulnerabilities common at the application layer. SELinux demonstrated that it could effectively contain attacks at the application layer and provide adequate flexibility during emergency situations. However, even with the use of current tools, the development of SELinux policies can be very complex. Further research is needed to simplify the management of SELinux policies and access permissions. In addition, SELinux-related technologies, such as the Policy Management Server by Tresys Technologies, need to be researched in order to provide solutions at different layers of protection.
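Purely as a conceptual sketch (in Python rather than SELinux policy language, and not the profiles developed in the thesis), the fragment below illustrates the default-deny, role-based idea behind such profiles: each role is granted an explicit whitelist of operations on labelled resources, and anything not listed is refused, so a compromised application confined to one role cannot reach beyond its grants. The role names and labels are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical roles/domains and resource labels for a health information system.
PROFILES = {
    "clinician_r": {("patient_record_t", "read"), ("patient_record_t", "append")},
    "admin_r":     {("audit_log_t", "read")},
    "webapp_t":    {("patient_record_t", "read")},   # a compromised web app stays confined
}

@dataclass
class Request:
    role: str
    resource_label: str
    operation: str

def allowed(req: Request) -> bool:
    """Default-deny check: permit only operations explicitly granted to the role."""
    return (req.resource_label, req.operation) in PROFILES.get(req.role, set())

print(allowed(Request("clinician_r", "patient_record_t", "read")))   # True
print(allowed(Request("webapp_t", "patient_record_t", "unlink")))    # False: denied by default
```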

Relevance: 10.00%

Abstract:

Information and communication technology (ICT) curriculum integration is the apparent goal of an extensive array of educational initiatives in all Australian states and territories. However, ICT curriculum integration is neither value-neutral nor universally understood. The literature indicates the complexity of rationales and terminology that underwrite various initiatives; various dimensions and stages of integration; inherent methodological difficulties; obstacles to integration; and significant issues relating to teacher professional development and ICT competencies (Jamieson-Proctor, Watson, & Finger, 2003). This paper investigates the overarching question: are ICT integration initiatives making a significant impact on teaching and learning in Queensland state schools? It reports the results from a teacher survey that measures the quantity and quality of student use of ICT. Results from 929 teachers across all year levels and from 38 Queensland state schools indicate that female teachers (73% of the full-time teachers in Queensland state schools in 2005) are significantly less confident than their male counterparts in using ICT with students for teaching and learning, and there is evidence of significant resistance to using ICT to align the curriculum with new times and new technologies. This result supports the hypothesis that current ICT initiatives are having uneven and less than the desired results system-wide. These results require further urgent investigation in order to address the factors that currently constrain the use of ICT for teaching and learning.

Relevance: 10.00%

Abstract:

Principal Topic: In this paper we seek to highlight the important intermediate role that the gestation process plays in entrepreneurship by examining its key antecedents and its consequences for new venture emergence. In doing so we take a behavioural perspective and argue that it is not only what a nascent venture is, but what it does (Katz & Gartner, 1988; Shane & Delmar, 2004; Reynolds, 2007) and when it does it during start-up (Reynolds & Miller, 1992; Lichtenstein, Carter, Dooley & Gartner, 2007) that is important. To extend an analogy from biological development, what we suggest is that the way a new venture is nurtured is just as fundamental as its nature. Much prior research has focused on the nature of new ventures and attempted to attribute variations in outcomes directly to the impact of resource endowments and investments. While there is little doubt that venture resource attributes such as human capital, and specifically prior entrepreneurial experience (Alsos & Kolvereid, 1998), and access to social (Davidsson & Honig, 2003) and financial capital have an influence, resource attributes themselves are distal from successful start-up endeavours and remain inanimate if not for the actions of the nascent venture. The key contribution we make is to shift focus from whether or not actions are taken to when these actions happen and how they are situated in the overall gestation process. Thus, we suggest that it is gestation process dynamics, or when gestation actions occur, that are more proximal to venture outcomes, and we focus on this. Recently scholars have highlighted the complexity that exists in the start-up or gestation process, be it temporal or contextual (Liao, Welsch & Tan, 2005; Lichtenstein et al., 2007). There is great variation in how long a start-up process might take (Reynolds & Miller, 1992), some processes require less action than others (Carter, Gartner & Reynolds, 1996), and the overall intensity of the start-up effort is also deemed important (Reynolds, 2007). And, despite some evidence that particular activities are more influential than others (Delmar & Shane, 2003), the order in which events may happen has until now been largely indeterminate as regards its influence on success (Liao & Welsch, 2008). We suggest that it is this complexity of the intervening gestation process that attenuates the effect of resource endowment and has resulted in mixed findings in previous research. Thus, in order to reduce complexity, we take a holistic view of the gestation process and argue that it is its dynamic properties that determine nascent venture attempt outcomes. Importantly, we acknowledge that particular gestation processes would not of themselves guarantee successful start-up; it is more correctly the fit between the process dynamics and the venture's attributes (Davidsson, 2005) that is influential. So we aim to examine process dynamics by comparing sub-groups of venture types by resource attributes. Thus, as an initial step toward unpacking the complexity of the gestation process, this paper aims to establish the importance of its role as an intermediary between attributes of the nascent venture and the emergence of that venture. Here, we make a contribution by empirically examining gestation process dynamics and their fit with venture attributes. We do this by, firstly, examining the nature of the influence that venture attributes such as human and social capital have on the dynamics of the gestation process, and, secondly, by investigating the effect that gestation process dynamics have on venture creation outcomes.

Methodology and Propositions: In order to explore the importance that gestation process dynamics have in nascent entrepreneurship we conduct an empirical study of venture start-ups. Data are drawn from a screened random sample of 625 Australian nascent business ventures prior to their achieving consistent outcomes in the market. These data were collected during 2007/8 and 2008/9 as part of the Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE) project (Davidsson et al., 2008). CAUSEE is a longitudinal panel study conducted over four years, sourcing information from annually administered telephone surveys. Importantly for our study, this methodology allows for the capture and tracking of active nascent venture creation as it happens, thus reducing hindsight and selection biases. In addition, improved tests of causality may be made given that outcome measures are temporally removed from preceding events. The data analysed in this paper represent the first two of these four years, and for the first time include follow-up outcome measures for these venture attempts: 260 were successful, 126 were abandoned, and 191 are still in progress. With regard to venture attributes as gestation process antecedents, we examine specific human capital, measured as successful prior experience in entrepreneurship, and direct social capital of the venture, measured as ‘team start-ups’. In assessing gestation process dynamics we follow Lichtenstein et al. (2007) in suggesting that the rate, concentration and timing of gestation activities may be used to summarise the complexity dynamics of that process. In addition, we extend this set of measures to include the interaction of discovery and exploitation by way of changes made to the venture idea. Those ventures with successful prior experience, or those which conduct symbiotic parallel start-up attempts, may be able to, or be forced to, leave their gestation action until later and still derive a successful outcome. In addition, access to direct social capital may provide the support upon which the venture may draw in order to persevere in the face of adversity, turning a seemingly futile start-up attempt into a success. On the other hand, prior experience may engender the foresight to terminate a venture attempt early should it be seen to be going nowhere. The temporal nature of these conjectures highlights the importance that process dynamics play and will be examined in this research. Statistical models are developed to examine gestation process dynamics. We use multivariate general linear modelling to analyse how human and social capital factors influence gestation process dynamics. In turn, we use event history models and stratified Cox regression to assess the influence that gestation process dynamics have on venture outcomes.

Results and Implications: What entrepreneurs do is of interest to both scholars and practitioners alike. Thus the results of this research are important, since they focus on nascent behaviour and its outcomes. While venture attributes themselves may be influential, this is of little actionable assistance to practitioners. For example, it is unhelpful to say to the prospective first-time entrepreneur “you’ll be more successful if you have lots of prior experience in firm start-ups”. This research attempts to close this relevance gap by addressing what gestation behaviours might be appropriate, when actions are best focused, and, most importantly, in what circumstances. Further, we make a contribution to the entrepreneurship literature by examining the role that gestation process dynamics play in outcomes, and by specifically attributing these to the nature of the venture itself. This extension is, to the best of our knowledge, new to the research field.
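A minimal sketch of the kind of stratified event-history analysis described above is given below, using the lifelines library in Python. The data frame and column names (duration in months, an emergence indicator, gestation-dynamics covariates and a prior-experience stratum) are hypothetical placeholders rather than CAUSEE variables, and the toy data exist only to show the shape of the analysis.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical nascent-venture records: time observed, whether the venture
# emerged (1) or was censored/abandoned (0), and two gestation covariates.
df = pd.DataFrame({
    "months_to_outcome":  [6, 14, 9, 22, 11, 18, 7, 25],
    "emerged":            [1, 0, 1, 0, 1, 1, 0, 1],
    "activity_rate":      [0.8, 0.2, 0.6, 0.1, 0.7, 0.5, 0.3, 0.9],
    "team_startup":       [1, 0, 0, 1, 1, 0, 1, 1],
    "prior_experience":   [0, 0, 1, 1, 0, 1, 0, 1],   # stratification variable
})

# A small ridge penalty keeps the toy example numerically stable.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df,
        duration_col="months_to_outcome",
        event_col="emerged",
        strata=["prior_experience"])       # stratified Cox model, as described above
cph.print_summary()
```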

Relevance: 10.00%

Abstract:

Background: In public health, as well as in other health education contexts, there is increasing recognition of the transformation in public health practice and the necessity for educational providers to keep pace. Traditionally, public health education has been at the postgraduate level; however, over the past decade an upsurge in undergraduate public health degrees has taken place. Discussion: This article explores the impact of these changes on the traditional sphere of Master of Public Health programs, the range of competencies required at undergraduate and postgraduate levels, and the relevance of these changes to the public health workforce. It raises questions about the complexity of the educational issues facing tertiary institutions and discusses the implications of these issues for undergraduate and postgraduate programs in public health. Conclusion: The planning and provision of education in public health must differentiate between the requirements of undergraduate and postgraduate students, while also addressing the changing needs of the health workforce. Within Australia, although significant research has been undertaken regarding the competencies required by postgraduate public health students, the approach is still somewhat piecemeal and does not address undergraduate public health. This paper argues for a consistent approach to competencies that describe and differentiate entry-level and advanced practice.

Relevance: 10.00%

Abstract:

We introduce K-tree in an information retrieval context. It is an efficient approximation of the k-means clustering algorithm; unlike k-means, it forms a hierarchy of clusters. It has been extended to address issues with sparse representations. We compare performance and quality against CLUTO using document collections. The K-tree has a low time complexity that makes it suitable for large document collections. Its tree structure allows for efficient disk-based implementations where space requirements exceed the capacity of main memory.
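A rough sketch of the general idea, assuming nothing about the actual K-tree data structures beyond what the abstract states: a hierarchy of clusters can be obtained by recursively applying k-means to the documents assigned to each node. The vectors, branching factor and stopping rule below are arbitrary, and the actual K-tree algorithm may differ in detail.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: returns (centroids, label per point)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((points[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

def build_cluster_tree(points, branch=2, min_leaf=4):
    """Recursively split points into `branch` clusters, forming a hierarchy."""
    if len(points) <= min_leaf:
        return {"leaf": True, "points": points}
    centroids, labels = kmeans(points, branch)
    return {
        "leaf": False,
        "centroids": centroids,
        "children": [build_cluster_tree(points[labels == j], branch, min_leaf)
                     for j in range(branch)],
    }

docs = np.random.default_rng(1).random((64, 8))   # toy "document vectors"
tree = build_cluster_tree(docs)
print("root children:", len(tree["children"]))
```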

Relevance: 10.00%

Abstract:

Introduction: For the past decade, three-dimensional (3D) culture has served as a foundation for regenerative medicine research. With increasing awareness of the importance of cell-cell and cell-extracellular matrix interactions, which are lacking in 2D culture systems, the 3D culture system has been employed for many other applications, notably cancer research. Through the development of various biomaterials and the utilization of tissue engineering technology, many in vivo physiological responses are now better understood. The cellular and molecular communication between cancer cells and their microenvironment, for instance, can be studied in vitro in a 3D culture system without relying on animal models alone. The predilection of prostate cancer (CaP) for bone remains obscure due to the complexity of the mechanisms and the lack of proper models for such studies. In this study, we aim to investigate the interaction between CaP cells and osteoblasts, simulating natural bone metastasis. We also further investigate the invasiveness of CaP cells and the response of androgen-sensitive CaP cells (LNCaP) to synthetic androgen.

Method: Human osteoblast (hOB) scaffolds were prepared by seeding hOBs on medical grade polycaprolactone-tricalcium phosphate (mPCL-TCP) scaffolds and inducing them to produce bone matrix. CaP cell lines, namely wild-type PC3 (PC3-N), prostate specific antigen-overexpressing PC3 (PC3k3s5) and LNCaP, were seeded on the hOB scaffolds as co-cultures. The morphology of the cells was examined by Phalloidin-DAPI and SEM imaging. Gelatin zymography was performed on the 48-hour conditioned media (CM) from the co-cultures to determine matrix metalloproteinase (MMP) activity. Gene expression in hOB/LNCaP co-cultures treated for 48 hours with 1 nM synthetic androgen R1881 was analysed by quantitative real-time PCR (qRT-PCR).

Results: PCC/hOB co-culture revealed that the morphology of PCCs on the tissue-engineered bone matrix varied from homogeneous to heterogeneous clusters. Enzymatically inactive pro-MMP2 was detected in CM from hOBs and PCCs cultured on scaffolds. Elevation in MMP9 activity was found only in the hOB/PC3-N co-culture. The hOB/LNCaP co-culture showed increased expression of key enzymes associated with steroid production, which also corresponded to an increase in prostate specific antigen (PSA) and MMP9.

Conclusions: Upregulation of MMP9 indicates the involvement of ECM degradation during cancer invasion and bone metastasis. Expression of PSA, an enzyme involved in CaP progression that is not expressed in osteoblasts, demonstrates that crosstalk between PCCs and osteoblasts may play a part in the aggressiveness of CaP. The presence of steroidogenic enzymes, particularly RDH5, in osteoblasts, and their stimulated expression in co-culture, may indicate osteoblast production of potent androgens, fuelling cancer cell proliferation. Based on these results, this practical 3D culture system may provide greater understanding of CaP-mediated bone metastasis, allowing the role of the CaP/hOB interaction with regard to invasive properties and steroidogenesis to be explored further.

Relevance: 10.00%

Abstract:

“Hardware in the Loop” (HIL) testing is widely used in the automotive industry. The sophisticated electronic control units used for vehicle control are usually tested and evaluated using HIL simulations. HIL increases the degree of realism in testing any system. Moreover, it helps in designing the structure and control of the system under test so that it works effectively in the situations that will be encountered in practice. Due to the size and complexity of interaction within a power network, most research is based on pure simulation. To validate the performance of a physical generator or protection system, most testing is constrained to very simple power networks. This research, however, examines a method to test power system hardware within a complex virtual environment using the HIL concept.

HIL testing for electronic control units and power system protection devices can easily be performed at signal level. However, the performance of power system equipment, such as distributed generation systems, cannot be evaluated at signal level using HIL testing. HIL testing for power system equipment is termed here ‘Power Network in the Loop’ (PNIL). PNIL testing can only be performed at power level and requires a power amplifier that can amplify the simulation signal to the power level. A power network is divided into two parts: one part represents the Power Network Under Test (PNUT) and the other represents the rest of the complex network. The complex network is simulated in a real-time simulator (RTS) while the PNUT is connected to a Voltage Source Converter (VSC) based power amplifier. Two-way interaction between the simulator and the amplifier is performed using analog-to-digital (A/D) and digital-to-analog (D/A) converters. The power amplifier amplifies the current or voltage signal of the simulator to the power level and establishes the power-level interaction between the RTS and the PNUT.

In the first part of this thesis, the design and control of a VSC-based power amplifier that can amplify a broadband voltage signal are presented. A new Hybrid Discontinuous Control method is proposed for the amplifier. This amplifier can be used for several power system applications, and its use in DSTATCOM and UPS applications is also presented in the first part. In the later part of the thesis, the solution for network-in-the-loop testing with the help of this amplifier is reported. The experimental setup for PNIL testing was built in the laboratory of Queensland University of Technology and the feasibility of PNIL testing was evaluated through experimental studies. In the last section of the thesis, a universal load with power-regenerative capability is designed; this universal load is used to test a DG system using PNIL concepts. The thesis is composed of published/submitted papers that form its chapters. Each paper has been published or submitted during the period of candidature. Chapter 1 integrates all the papers to provide a coherent view of the wide-bandwidth switching amplifier and its use in different power system applications, especially the solution of power system testing using PNIL.
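The closed loop described above (a real-time simulator on one side, the amplified physical device on the other, coupled through D/A and A/D conversion) can be pictured with a purely conceptual sketch. In the Python fragment below both sides are simulated in software for illustration only; in a real PNIL setup the "device" would be physical hardware behind an actual converter and power amplifier, and the gain, time step and load values used here are arbitrary.

```python
# Conceptual illustration of the Power-Network-in-the-Loop exchange:
# each step, the virtual network sends a voltage reference out through an
# (imaginary) D/A converter and power amplifier, the physical device responds
# with a current, and that current is read back through the A/D converter.
DT = 1e-4          # simulation time step in seconds (arbitrary)
AMP_GAIN = 20.0    # power amplifier voltage gain (arbitrary)

def virtual_network(feedback_current):
    """Stand-in for the real-time simulator: a stiff source with simple droop."""
    return 10.0 - 0.5 * feedback_current

def device_under_test(v_applied):
    """Stand-in for the physical PNUT: a purely resistive 5-ohm load."""
    return v_applied / 5.0

i_measured = 0.0
for step in range(5):
    v_signal = virtual_network(i_measured)    # simulator output (signal level)
    v_power = AMP_GAIN * v_signal             # D/A + power amplifier
    i_power = device_under_test(v_power)      # physical response (power level)
    i_measured = i_power / AMP_GAIN           # sensing + A/D back to signal level
    print(f"t={step * DT:.4f}s  v_signal={v_signal:.3f}  i_fedback={i_measured:.3f}")
```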

Relevance: 10.00%

Abstract:

Introduction Many bilinguals will have had the experience of unintentionally reading something in a language other than the intended one (e.g. MUG to mean mosquito in Dutch rather than a receptacle for a hot drink, as one of the possible intended English meanings), of finding themselves blocked on a word for which many alternatives suggest themselves (but, somewhat annoyingly, not in the right language), of their accent changing when stressed or tired and, occasionally, of starting to speak in a language that is not understood by those around them. These instances where lexical access appears compromised and control over language behavior is reduced hint at the intricate structure of the bilingual lexical architecture and the complexity of the processes by which knowledge is accessed and retrieved. While bilinguals might tend to blame word finding and other language problems on their bilinguality, these difficulties per se are not unique to the bilingual population. However, what is unique, and yet far more common than is appreciated by monolinguals, is the cognitive architecture that subserves bilingual language processing. With bilingualism (and multilingualism) the rule rather than the exception (Grosjean, 1982), this architecture may well be the default structure of the language processing system. As such, it is critical that we understand more fully not only how the processing of more than one language is subserved by the brain, but also how this understanding furthers our knowledge of the cognitive architecture that encapsulates the bilingual mental lexicon. The neurolinguistic approach to bilingualism focuses on determining the manner in which the two (or more) languages are stored in the brain and how they are differentially (or similarly) processed. The underlying assumption is that the acquisition of more than one language requires at the very least a change to or expansion of the existing lexicon, if not the formation of language-specific components, and this is likely to manifest in some way at the physiological level. There are many sources of information, ranging from data on bilingual aphasic patients (Paradis, 1977, 1985, 1997) to lateralization (Vaid, 1983; see Hull & Vaid, 2006, for a review), recordings of event-related potentials (ERPs) (e.g. Ardal et al., 1990; Phillips et al., 2006), and positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies of neurologically intact bilinguals (see Indefrey, 2006; Vaid & Hull, 2002, for reviews). Following the consideration of methodological issues and interpretative limitations that characterize these approaches, the chapter focuses on how the application of these approaches has furthered our understanding of (1) selectivity of bilingual lexical access, (2) distinctions between word types in the bilingual lexicon and (3) control processes that enable language selection.

Relevance: 10.00%

Abstract:

An important aspect of designing any product is validation. The virtual design process (VDP) is an alternative to hardware prototyping in which designs can be analysed without manufacturing physical samples. In recent years, VDPs have been generated mainly for animation and filming applications. This paper proposes a virtual reality design process model for one such application: its use as a validation tool. The technique is used to generate a complete design guideline and validation tool for product design. To support the design process of a product, a virtual environment and a VDP method were developed that support validation and an initial design cycle performed by a designer. A car carrier product model is used as the illustration for which the virtual design was generated. The loading and unloading sequence of the model for the prototype was generated using automated reasoning techniques and was completed by interactively animating the product in the virtual environment before the complete design was built. By using the VDP, critical issues such as loading, unloading, Australian Design Rules (ADR) compliance and clearance analysis were addressed. The process saves time and money in physical sampling and, to a large extent, in complete math model generation. Since only schematic models are required, it saves time in math modelling and in handling larger assemblies with complex models. This extension of the VDP to design evaluation is unique and was developed and implemented successfully. In this paper, a Toll Logistics and J Smith and Sons car carrier, developed under the author's responsibility, is used to illustrate our approach to generating design validation via the VDP.

Relevance: 10.00%

Abstract:

This document provides a review of international and national practices in investment decision support tools in road asset management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies and criteria adopted by current tools. Emphasis was also given to how current approaches support Triple Bottom Line decision-making.

Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies supporting decision-making in road asset management. The complexity of the applications shows significant differences in international practices. There is continuing discussion amongst practitioners and researchers regarding which one is more appropriate for supporting decision-making. It is suggested that the two approaches should be regarded as complementary rather than competitive means. Multiple Criteria Analysis may be particularly helpful in the early stages of project development, such as strategic planning. Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives.

The Benefit Cost Analysis approach is a useful tool for investment decision-making from an economic perspective. An extension of the approach, which includes social and environmental externalities, is currently used to support Triple Bottom Line decision-making in the road sector. However, several issues in its application deserve attention. First of all, there is a need to reach a degree of commonality in the treatment of social and environmental externalities, which may be achieved by aggregating the best practices. At different decision-making levels, the level of detail at which externalities are considered should differ; it is intended to develop a generic framework to coordinate the range of existing practices. A standard framework will also help reduce double counting, which appears in some current practices. Caution should also be exercised regarding the methods used to determine the value of social and environmental externalities. A number of methods, such as market price, resource costs and Willingness to Pay, were found in the review. The use of unreasonable monetisation methods in some cases has discredited Benefit Cost Analysis in the eyes of decision-makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practices; this is due to the lack of information and credible models. It may be appropriate to consider these externalities in qualitative form within a Multiple Criteria Analysis. Consensus has been reached on considering noise and air pollution in international practices; however, Australian practices have generally omitted these externalities. Equity is an important consideration in road asset management: the considerations are either between regions or between social groups (for example by income, age, gender or disability). In current practice there is no well-developed quantitative measure for equity issues, and more research is needed to target this issue.

Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for the choice of modelling methods and the treatment of various externalities. The result is that different analysts are unlikely to reach consistent conclusions about a policy measure. In current practice, some favour methods which are able to prioritise alternatives, such as Goal Programming, the Goal Achievement Matrix and the Analytic Hierarchy Process; others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analyses. However, the processes of assessing weights and scores have been criticised as highly arbitrary and subjective, so it is essential that the process be as transparent as possible. Obtaining weights and scores by consulting local communities is a common practice, but is likely to result in bias towards local interests. The interactive approach has the advantage of helping decision-makers elaborate their preferences; however, the computational burden may cause decision-makers to lose interest during the solution process for a large-scale problem, such as a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities. Distorted valuations can occur where variables measured in physical units are converted to scales. For example, if decibels of noise are converted to a scale of -4 to +4 with a linear transformation, the difference between 3 and 4 represents a far greater increase in discomfort to people than the increase from 0 to 1; it is therefore suggested that different weights be assigned to individual scores. Due to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analyses. The situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have been given scant attention in current practices. This report suggests establishing a common analytic framework to deal with these issues.
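To make the weighting and scoring discussion concrete, here is a small Python sketch of a weighted-sum Multiple Criteria Analysis for ranking hypothetical road projects. The criteria, weights and raw values are invented for the example and are not drawn from the practices reviewed above; min-max normalisation is used so that benefit and cost criteria are comparable before weights are applied.

```python
# Illustrative weighted-sum MCA for ranking road projects (toy data only).
CRITERIA = {            # weight, and whether a higher raw value is better
    "travel_time_saving": (0.4, True),
    "crash_reduction":    (0.3, True),
    "noise_increase_db":  (0.2, False),   # a cost: higher is worse
    "capital_cost_m":     (0.1, False),
}

projects = {
    "Upgrade A": {"travel_time_saving": 120, "crash_reduction": 8,  "noise_increase_db": 3, "capital_cost_m": 40},
    "Upgrade B": {"travel_time_saving": 90,  "crash_reduction": 12, "noise_increase_db": 1, "capital_cost_m": 55},
    "Upgrade C": {"travel_time_saving": 60,  "crash_reduction": 5,  "noise_increase_db": 0, "capital_cost_m": 20},
}

def normalise(name, value):
    """Min-max normalisation across projects; cost criteria are inverted."""
    values = [p[name] for p in projects.values()]
    lo, hi = min(values), max(values)
    score = 0.5 if hi == lo else (value - lo) / (hi - lo)
    _, higher_is_better = CRITERIA[name]
    return score if higher_is_better else 1.0 - score

def weighted_score(attrs):
    return sum(weight * normalise(name, attrs[name])
               for name, (weight, _) in CRITERIA.items())

for project, attrs in sorted(projects.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{project}: {weighted_score(attrs):.3f}")
```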