935 results for Measuring method
Abstract:
The LiteSteel Beam (LSB) is an innovative cold-formed steel hollow flange section. When used as floor joists, LSB sections require holes in the web to provide access for various services. In this study, a detailed investigation of the elastic lateral distortional buckling behaviour of LSBs with circular web openings subjected to a uniform moment was undertaken using finite element analysis. Validated ideal finite element models were used first to study the effect of web holes on the elastic lateral distortional buckling behaviour of LSBs. An equivalent web thickness method was then proposed, using four different equations, for the elastic buckling analysis of LSBs with web holes. It was found that two of these equations could be successfully used with approximate numerical models, based on solid web elements with an equivalent reduced thickness, to predict the elastic lateral distortional buckling moments.
Abstract:
At first glance the built environments of South Florida and South East Queensland appear very similar, particularly along the highly urbanized coast. However, this apparent similarity belies some fundamental differences between the two regions in terms of context and the approach to regulating development. This paper describes some of these key differences, but focuses on two research questions: 1) do these differences affect the built environment; and 2) if so, how does the built form differ? There has been considerable research on how best to measure urban form, particularly as it relates to measuring urban sprawl (Schwarz 2010; Clifton et al. 2008). Some of the key questions identified by this research include: what are the best variables to use, what scale should be used, and what time period should be used? We assimilate this research to develop a methodology for measuring urban form and apply it to both case study regions. There are two potential outcomes from this research: one is that the built form of the two regions is quite different; the second is that it is similar. The first outcome is what might be expected given the differences in context and development regulation. But how might the second outcome be explained: major differences in context and development regulation resulting in only minor differences in key measures of urban form? One explanation is that differences in the way development is regulated are not as important in determining the built form as private market forces.
Abstract:
The paper introduces the underlying principles and the general features of a meta-method (MAP method) developed as part of, and used in, various research, education and professional development programmes at ESC Lille. This method aims to provide an effective and efficient structure and process for acting and learning in complex, uncertain and ambiguous managerial situations (projects, programmes, portfolios). The paper is developed around three main parts. First, I suggest revisiting the dominant vision of the project management knowledge field, based on the assumptions that it does not adequately address current business and management contexts and situations, and that competencies in the management of entrepreneurial activities are the sources of value creation for organisations. Then, grounded in these developments, I introduce the underlying concepts supporting the MAP method, seen as a 'convention generator', and how this meta-method inextricably links learning and practice in addressing managerial situations. Finally, I briefly describe an example of application, illustrating with a case study how the method integrates Project Management Governance, and give a few examples of use in Management Education and Professional Development.
Abstract:
The paper introduces the underlying principles and the general features of a meta-method (MAP method – Management & Analysis of Projects) developed as part of, and used in, various research, education and professional development programmes at ESC Lille. This method aims to provide an effective and efficient structure and process for acting and learning in complex, uncertain and ambiguous managerial situations (projects, programmes, portfolios). The paper is organized in three parts. The first part revisits the dominant vision of the project management knowledge field, based on the assumptions that it does not adequately address current business and management contexts and situations, and that competencies in the management of entrepreneurial activities are the sources of value creation for organisations. Then, grounded in the newly suggested perspective, the second part presents the underlying concepts supporting the MAP method, seen as a 'convention generator', and how this meta-method inextricably links learning and practice in addressing managerial situations. The third part describes an example of application, illustrating with a brief case study how the method integrates Project Management Governance, and gives a few examples of use in Management Education and Professional Development.
Abstract:
Increasing global competition has forced manufacturing organizations to produce high-quality products more quickly and at a competitive cost, which demands continuous improvement techniques. In this paper, we propose a fuzzy-based performance evaluation method for the lean supply chain. To understand the overall performance of a cost-competitive supply chain, we investigate the alignment of market strategy and the position of the supply chain. Competitive strategies can be achieved by using different weight calculations for different supply chain situations. By identifying optimal performance metrics and applying performance evaluation methods, managers can predict the overall supply chain performance under a lean strategy.
Abstract:
Volume measurements are useful in many branches of science and medicine. They are usually accomplished by acquiring a sequence of cross-sectional images through the object using an appropriate scanning modality, for example x-ray computed tomography (CT), magnetic resonance (MR) or ultrasound (US). In the cases of CT and MR, a dividing cubes algorithm can be used to describe the surface as a triangle mesh. However, such algorithms are not suitable for US data, especially when the image sequence is multiplanar (as it usually is). This problem may be overcome by manually tracing regions of interest (ROIs) on the registered multiplanar images and connecting the points into a triangular mesh. In this paper we describe and evaluate a new discrete form of Gauss' theorem which enables the calculation of the volume of any enclosed surface described by a triangular mesh. The volume is calculated by summing the scalar product of the centroid and the area-weighted normal of each surface triangle. The algorithm was tested on computer-generated objects, US-scanned balloons, livers and kidneys, and CT-scanned clay rocks. The results, expressed as the mean percentage difference ± one standard deviation, were 1.2 ± 2.3, 5.5 ± 4.7, 3.0 ± 3.2 and −1.2 ± 3.2% for balloons, livers, kidneys and rocks respectively. The results compare favourably with other volume estimation methods such as planimetry and tetrahedral decomposition.
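The per-triangle summation the abstract describes maps directly onto a few lines of array code. Below is a minimal sketch (not the authors' implementation; the function name and NumPy usage are illustrative) of the discrete Gauss'-theorem volume for a closed, consistently wound triangular mesh, checked against a unit tetrahedron:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed triangular mesh via the discrete
    divergence (Gauss) theorem: V = (1/3) * sum_i A_i * (n_i . c_i),
    where c_i, n_i, A_i are each triangle's centroid, outward unit
    normal and area. A_i * n_i is half the edge cross product."""
    v0 = vertices[triangles[:, 0]]
    v1 = vertices[triangles[:, 1]]
    v2 = vertices[triangles[:, 2]]
    centroids = (v0 + v1 + v2) / 3.0
    # area-weighted outward normals (consistent winding assumed)
    area_normals = 0.5 * np.cross(v1 - v0, v2 - v0)
    return np.einsum('ij,ij->', centroids, area_normals) / 3.0

# sanity check on a unit right tetrahedron (exact volume 1/6)
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(mesh_volume(verts, tris))  # ~0.16667
```

Because the dot product of the unit normal with any position vector is constant over a flat triangle, the centroid can stand in for the full surface integral, which is what makes the per-triangle sum exact for polyhedral meshes.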
Abstract:
Monodisperse silica nanoparticles were synthesised by the well-known Stöber protocol, then dispersed in acetonitrile (ACN) and subsequently added to a bisacetonitrile gold(I) coordination complex ([Au(MeCN)2]+) in ACN. The silica hydroxyl groups were deprotonated in the presence of ACN, generating a formal negative charge on the siloxy groups. This allowed the [Au(MeCN)2]+ complex to undergo ligand exchange with the silica nanoparticles and form a surface coordination complex, with reduction to metallic gold (Au(0)) proceeding by an inner-sphere mechanism. The residual [Au(MeCN)2]+ complex was allowed to react with water, disproportionating into Au(0) and Au(III), with the Au(0) adding to the reduced gold already bound on the silica surface. The so-formed metallic gold seed surface was found to be suitable for the conventional reduction of Au(III) to Au(0) by ascorbic acid (ASC). This process generated a thin and uniform gold coating on the silica nanoparticles. The silica NP batches synthesised ranged in size from 45 to 460 nm. Of these, the batches in the size range from 400 to 480 nm were used for the gold-coating experiments.
Abstract:
This note examines the productive efficiency of 62 starting guards during the 2011/12 National Basketball Association (NBA) season. This period coincides with the phenomenal and largely unanticipated performance of New York Knicks' starting point guard Jeremy Lin and the attendant public and media hype known as Linsanity. We employ a data envelopment analysis (DEA) approach that includes allowance for an undesirable output, here turnovers per game, alongside the desirable outputs of points, rebounds, assists, steals, and blocks per game and an input of minutes per game. The results indicate that, depending upon the specification, between 29 and 42 percent of NBA guards are fully efficient, including Jeremy Lin, with mean inefficiency ranging from 3.7 to 19.2 percent. However, while Jeremy Lin is technically efficient, he seldom serves as a benchmark for inefficient players, at least when compared with established players such as Chris Paul and Dwyane Wade. This suggests the uniqueness of Jeremy Lin's productive solution and may explain why his style of play, encompassing individual brilliance, unselfish play, and team leadership, is of such broad public appeal.
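The abstract does not spell out the exact DEA specification, so the following is a hedged sketch of the plain input-oriented CCR envelopment model solved with scipy.optimize.linprog, folding the undesirable output (turnovers) in as an input, which is one common workaround and not necessarily the treatment used in the note. All player data are made up for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, unit):
    """Input-oriented CCR DEA efficiency of one decision-making unit.
    X: (m inputs x n units), Y: (s outputs x n units).
    Solves: min theta  s.t.  X @ lam <= theta * x_unit,
            Y @ lam >= y_unit,  lam >= 0.
    LP variables are stacked as [theta, lam_1, ..., lam_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.hstack([-X[:, [unit]], X])        # X lam - theta x_u <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y lam <= -y_u
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, unit]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    return res.x[0]

# hypothetical data for 4 guards: inputs = minutes/game and
# turnovers/game (undesirable output treated as an input);
# outputs = points and assists per game
X = np.array([[36.0, 34.0, 30.0, 38.0],   # minutes
              [3.5, 2.0, 2.8, 4.1]])      # turnovers
Y = np.array([[18.0, 22.0, 14.0, 25.0],   # points
              [7.5, 9.0, 6.0, 8.5]])      # assists
for u in range(4):
    print(f"guard {u}: efficiency {dea_ccr_efficiency(X, Y, u):.3f}")
```

A unit with efficiency 1.0 lies on the frontier; the optimal lambda weights of inefficient units identify which efficient peers serve as their benchmarks, which is the comparison the note draws between Lin and players such as Paul and Wade.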
Abstract:
In phylogenetics, the unrooted model of phylogeny and the strict molecular clock model are two extremes of a continuum. Despite their dominance in phylogenetic inference, it is evident that both are biologically unrealistic and that the real evolutionary process lies between these two extremes. Fortunately, intermediate models employing relaxed molecular clocks have been described. These models open the gate to a new field of “relaxed phylogenetics.” Here we introduce a new approach to performing relaxed phylogenetic analysis. We describe how it can be used to estimate phylogenies and divergence times in the face of uncertainty in evolutionary rates and calibration times. Our approach also provides a means for measuring the clocklikeness of datasets and comparing this measure between different genes and phylogenies. We find no significant rate autocorrelation among branches in three large datasets, suggesting that autocorrelated models are not necessarily suitable for these data. In addition, we place these datasets on the continuum of clocklikeness between a strict molecular clock and the alternative unrooted extreme. Finally, we present analyses of 102 bacterial, 106 yeast, 61 plant, 99 metazoan, and 500 primate alignments. From these we conclude that our method is phylogenetically more accurate and precise than the traditional unrooted model while adding the ability to infer a timescale to evolution.
Abstract:
A standard method for the numerical solution of partial differential equations (PDEs) is the method of lines. In this approach the PDE is discretised in space using finite differences or similar techniques, and the resulting semidiscrete problem in time is integrated using an initial value problem solver. A significant challenge when applying the method of lines to fractional PDEs is that the non-local nature of the fractional derivatives results in a discretised system where each equation involves contributions from many (possibly all) of the spatial nodes. This has important consequences for the efficiency of the numerical solver. First, since the cost of evaluating the discrete equations is high, it is essential to minimise the number of evaluations required to advance the solution in time. Second, since the Jacobian matrix of the system is dense (partially or fully), methods that avoid the need to form and factorise this matrix are preferred. In this paper, we consider a nonlinear two-sided space-fractional diffusion equation in one spatial dimension. A key contribution of this paper is to demonstrate how an effective preconditioner is crucial for improving the efficiency of the method of lines for solving this equation. In particular, we show how to construct suitable banded approximations to the system Jacobian for preconditioning purposes that permit high orders and large stepsizes to be used in the temporal integration, without requiring dense matrices to be formed. The results of numerical experiments are presented that demonstrate the effectiveness of this approach.
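To make the banded-preconditioning idea concrete, here is a minimal sketch. The discretisation (shifted Grünwald-Letnikov weights for a linearised two-sided operator) and all parameter values are assumptions for illustration, not the paper's scheme; solve_banded, LinearOperator and gmres are standard SciPy APIs:

```python
import numpy as np
from scipy.linalg import solve_banded
from scipy.sparse.linalg import LinearOperator, gmres

# Illustrative dense operator K from shifted Grunwald-Letnikov weights
# for a two-sided fractional diffusion operator of order alpha.
n, alpha, dt, d_coef = 200, 1.6, 1e-3, 1.0
g = np.empty(n + 1)
g[0] = 1.0
for k in range(1, n + 1):                 # g_k = g_{k-1} (k-1-alpha)/k
    g[k] = g[k - 1] * (k - 1 - alpha) / k
h = 1.0 / (n + 1)
L = np.zeros((n, n))
for i in range(n):
    for j in range(min(i + 2, n)):        # lower Hessenberg structure
        L[i, j] = g[i - j + 1]
K = d_coef / h**alpha * (L + L.T)         # every node couples widely
A = np.eye(n) - dt * K                    # implicit Euler step matrix

# Banded preconditioner: keep only the diagonals of A within
# bandwidth p, stored in LAPACK banded form, and apply its solve.
p = 5
ab = np.zeros((2 * p + 1, n))
for k in range(-p, p + 1):
    d = np.diag(A, k)
    ab[p - k, k:] = d if k >= 0 else 0    # placeholder, set below
    if k >= 0:
        ab[p - k, k:] = d
    else:
        ab[p - k, :n + k] = d
M = LinearOperator((n, n), matvec=lambda v: solve_banded((p, p), ab, v))

b = np.ones(n)                            # right-hand side of one step
x, info = gmres(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "| residual:", np.linalg.norm(A @ x - b))
```

A production code would factorise the banded approximation once per Jacobian update rather than on every preconditioner application, but the bandwidth-p structure, which is the point of the paper, is already visible here: the Krylov solver never forms or factorises the dense matrix.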
Abstract:
Enterprise architecture management (EAM) has become an intensively discussed approach to managing enterprise transformations. While many organizations employ EAM, notable uncertainty about the value of EAM remains. In this paper, we propose a model to measure the realization of benefits from EAM. We identify EAM success factors and EAM benefits through a comprehensive literature review and eleven exploratory expert interviews. Based on our findings, we integrate the EAM success factors and benefits with the established DeLone & McLean IS success model, resulting in a model that explains the realization of EAM benefits. This model serves organizations as a benchmark and framework for identifying and assessing the setup of their EAM initiatives and whether and how EAM benefits are realized. We also see our model as a first step toward gaining insights into, and starting a discussion on, the theory of EAM benefit realization.
Abstract:
Work integrated learning (WIL) or professional practice units are recognised as providing learning experiences that help students make successful transitions to professional practice. These units require students to engage in learning in the workplace, to reflect on this learning, and to integrate it with learning at university. However, an analysis of a recent cohort of property economics students at a large urban university provides evidence that there is great variation in the work-based learning experiences undertaken, and that this impacts on students' capacity to respond to assessment tasks which involve critiquing these experiences in the form of reflective reports. This paper highlights the need to recognise the diversity of work-based experiences, the impact this has on learning outcomes, and the need to find more effective and equitable ways of measuring these outcomes. The paper briefly discusses assessing learning outcomes in WIL and then describes the model of WIL in the Faculty of Built Environment and Engineering at the Queensland University of Technology (QUT). The paper elaborates on the diversity of students' experiences and backgrounds, including variations in the length of work experience, placement opportunities and conditions of employment. For example, the analysis shows that students with limited work experience often have difficulty critiquing this work experience and producing high-level reflective reports. On the other hand, students with extensive, discipline-relevant work experience can be frustrated by assessment requirements that do not take their experience into account. Added to this, the Global Financial Crisis (GFC) has restricted both part-time and full-time placement opportunities for some students. These factors affect students' capacity to a) secure a relevant work experience, b) reflect critically on the work experiences and c) appreciate the impact the overall experience can have on their learning outcomes and future professional opportunities. Our investigation highlights some of the challenges faced in implementing effective and equitable approaches across diverse student cohorts. We suggest that increased flexibility in assessment requirements and increased feedback from industry may help address these challenges.
Abstract:
We examine methodologies and methods that apply to multi-level research in the learning sciences. In so doing we describe how multiple theoretical frameworks inform the use of different methods that apply to social levels involving space-time relationships that are not consciously accessible as social life is enacted. Most of the methods involve analyses of video and audio files. Within a framework of interpretive research we present a methodology of event-oriented social science, which employs video ethnography, narrative, conversation analysis, prosody analysis, and facial expression analysis. We illustrate multi-method research in an examination of the role of emotions in teaching and learning. Conversation and prosody analyses augment facial expression analysis and ethnography. We conclude with an exploration of ways in which multi-level studies can be complemented with neural-level analyses.
Abstract:
This letter presents a technique to assess the overall network performance of sampled value process buses based on IEC 61850-9-2, using measurements from a single location in the network. The method is based upon the use of Ethernet cards with externally synchronized time stamping, together with characteristics of the process bus protocol. The application and utility of the method are demonstrated by measuring the latency introduced by Ethernet switches. Network latency can be measured from a single set of captures, rather than by comparing separate source and destination captures. Absolute latency measures will greatly assist the design, testing, commissioning and maintenance of these critical data networks.
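As a rough illustration of the single-capture-point idea, the sketch below pairs each frame seen on the switch's ingress and egress ports within one synchronised capture set and differences the hardware timestamps. Matching frames by a payload digest is a simplification of this sketch, not the letter's method; real IEC 61850-9-2 traffic would be matched on fields such as the APPID and sample counter, and the Frame structure here is hypothetical:

```python
import hashlib
import statistics
from dataclasses import dataclass

@dataclass
class Frame:
    """One captured Ethernet frame with a hardware timestamp (ns)."""
    port: str       # 'ingress' or 'egress' side of the switch under test
    ts_ns: int      # externally synchronised capture-card timestamp
    payload: bytes

def switch_latencies_ns(frames):
    """Pair the same frame seen before and after the switch (matched
    here by payload digest) and return per-frame latencies in ns."""
    ingress = {}
    latencies = []
    for f in sorted(frames, key=lambda f: f.ts_ns):
        key = hashlib.sha256(f.payload).digest()
        if f.port == 'ingress':
            ingress[key] = f.ts_ns
        elif key in ingress:
            latencies.append(f.ts_ns - ingress.pop(key))
    return latencies

# hypothetical capture: two frames passing through one switch
caps = [Frame('ingress', 1_000_000, b'sv-frame-1'),
        Frame('egress',  1_004_200, b'sv-frame-1'),
        Frame('ingress', 1_250_000, b'sv-frame-2'),
        Frame('egress',  1_256_900, b'sv-frame-2')]
lat = switch_latencies_ns(caps)
print(f"mean latency: {statistics.mean(lat)} ns, max: {max(lat)} ns")
```

Because both ports are timestamped by the same externally synchronised clock, the difference of the two timestamps is an absolute latency measure, with no need to reconcile separate source and destination captures.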