Abstract:
Our aim is to develop a set of leading performance indicators to enable managers of large projects to forecast, during project execution, how various stakeholders will perceive success months or even years into the operation of the output. Large projects have many stakeholders who have different objectives for the project, its output, and the business objectives it will deliver. The output of a large project may have a lifetime that lasts for years, or even decades, and ultimate impacts that go beyond its immediate operation. How different stakeholders perceive success can change with time, and so the project manager needs leading performance indicators that go beyond the traditional triple constraint to forecast how key stakeholders will perceive success months or even years later. In this article, we develop a model for project success that identifies how project stakeholders might perceive success in the months and years following a project. We identify success or failure factors that will facilitate or militate against achievement of those success criteria, and a set of potential leading performance indicators that forecast how stakeholders will perceive success during the life of the project's output. We conducted a scale development study with 152 managers of large projects and identified two project success factor scales and seven stakeholder satisfaction scales that project managers can use to predict stakeholder satisfaction on projects, and so may serve as the basis of project control for managers of large projects.
Abstract:
In this paper, we highlight the existence of multi-founder firms, which were founded by multiple individuals (with no family connections) who are still actively involved in the firm as directors and/or managers. These firms provide a unique setting to shed further light on the net valuation effects of founder involvement. In particular, multi-founder firms provide us with the opportunity to examine the benefits and costs to shareholders of multiple founders involved as directors, CEOs and managers in the same firm. Our analysis indicates that multi-founder firms are more valuable than all other types of firms, including single-founder firms and family firms, with the valuation premium positively related to the number of founders involved in the firm. Further analysis confirms that this valuation premium is linked to the direct involvement of the multiple founders as directors and CEOs. However, further founder involvement in vice president positions has a negative relationship with firm value.
Abstract:
Software as a Service (SaaS) in the Cloud has recently become increasingly significant to software users and providers. A SaaS delivered as a composite application has many benefits, including reduced delivery costs, flexible offerings of SaaS functions, and decreased subscription costs for users. However, this approach introduces a new problem in managing the resources allocated to the composite SaaS. The resource allocation performed at the initial stage may become overloaded or wasteful in the dynamic environment of a Cloud. A typical data centre resource manager usually triggers a placement reconfiguration for the SaaS in order to maintain its performance as well as to minimise the resources used. Existing approaches to this problem often ignore the underlying dependencies between SaaS components. In addition, the reconfiguration also has to comply with SaaS constraints in terms of its resource requirements, placement requirements, and its SLA. To tackle the problem, this paper proposes a penalty-based Grouping Genetic Algorithm for clustering multiple composite SaaS components in the Cloud. The main objective is to minimise the resources used by the SaaS by clustering its components without violating any constraint. Experimental results demonstrate the feasibility and scalability of the proposed algorithm.
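As a rough illustration of the penalty idea, the sketch below evolves component-to-server assignments and charges a penalty for capacity violations. It is a deliberately simplified toy (plain assignment encoding, single-point crossover, a capacity constraint only); the paper's actual Grouping Genetic Algorithm uses a group-based encoding and also handles dependency, placement, and SLA constraints. All names, sizes, and weights are assumptions.

```python
# Minimal sketch of a penalty-based GA for placing SaaS components onto
# servers. Sizes, demands, and the fitness weights are illustrative
# assumptions, not the authors' actual formulation.
import random

N_COMPONENTS = 12
N_SERVERS = 6                      # maximum number of groups (servers)
CAPACITY = 100                     # per-server resource capacity (assumed)
DEMAND = [random.randint(20, 60) for _ in range(N_COMPONENTS)]
PENALTY = 10.0                     # weight applied to constraint violations

def fitness(assign):
    """Servers used plus a penalty for capacity violations (lower is better)."""
    load = {}
    for comp, srv in enumerate(assign):
        load[srv] = load.get(srv, 0) + DEMAND[comp]
    servers_used = len(load)
    overflow = sum(max(0, l - CAPACITY) for l in load.values())
    return servers_used + PENALTY * (overflow / CAPACITY)

def mutate(assign, rate=0.1):
    return [random.randrange(N_SERVERS) if random.random() < rate else s
            for s in assign]

def crossover(a, b):
    cut = random.randrange(1, N_COMPONENTS)   # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randrange(N_SERVERS) for _ in range(N_COMPONENTS)]
       for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness)
    elite = pop[:10]                          # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]
best = min(pop, key=fitness)
print("best placement:", best, "fitness:", round(fitness(best), 2))
```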
Abstract:
Based on the eigen crack opening displacement (COD) boundary integral equations, a newly developed computational approach is proposed for the analysis of multiple crack problems. The eigen COD particularly refers to a crack in an infinite domain under fictitious traction acting on the crack surface. With the concept of eigen COD, multiple cracks in great number can be solved by using the conventional displacement discontinuity boundary integral equations in an iterative fashion with a small-sized system matrix. The interactions among cracks are dealt with in two parts, according to the distances of the other cracks to the current crack: the strong effects of cracks in the adjacent group are treated with the aid of the local Eshelby matrix derived from the traction BIEs in discrete form, while the relatively weak effects of cracks in the far-field group are treated in the iteration procedures. Numerical examples are provided for the stress intensity factors of multiple cracks, up to several thousands in number, with the proposed approach. By comparing with the analytical solutions in the literature as well as solutions of the dual boundary integral equations, the effectiveness and the efficiency of the proposed approach are verified.
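The near/far splitting can be pictured with a generic linear system: the unknown crack opening displacements solve A x = b, where couplings from adjacent cracks are kept in a "near" operator that is solved directly, and far-field couplings are lagged one iteration. The sketch below is a minimal stand-in under assumed sizes, positions, kernel, and tolerance; it illustrates the splitting idea only, not the actual eigen COD formulation or Eshelby-matrix construction.

```python
# Minimal sketch of the adjacent/far-field splitting idea. The unknowns x
# (stand-ins for crack opening displacements) solve A x = b. "Near" couplings
# (cracks within a distance threshold) are solved directly each sweep, while
# "far" couplings are lagged one iteration. Sizes, positions, the 1/(1+d^2)
# kernel, the threshold, and the tolerance are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # number of crack unknowns (assumed)
pos = rng.uniform(0.0, 10.0, size=n)     # 1-D stand-in for crack locations
d = np.abs(pos[:, None] - pos[None, :])  # pairwise crack distances
A = np.eye(n) + 0.02 / (1.0 + d ** 2)    # self term + decaying interactions
b = rng.uniform(1.0, 2.0, size=n)        # fictitious-traction right-hand side

near = d < 1.0                           # adjacent-group mask (assumed radius)
A_near = np.where(near, A, 0.0)          # strong couplings: solved directly
A_far = A - A_near                       # weak couplings: handled iteratively

x = np.zeros(n)
for it in range(100):
    x_new = np.linalg.solve(A_near, b - A_far @ x)
    done = np.abs(x_new - x).max() < 1e-12
    x = x_new
    if done:
        break
print(f"converged in {it} sweeps, residual {np.abs(A @ x - b).max():.2e}")
```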
Abstract:
Online social networks connect millions of people around the globe, and these electronic bonds make individuals comfortable sharing information about their behaviours. Such willingness to share information is a useful phenomenon that warrants consideration for its socio-scientific effects. Recently, many web users have come to hold more than one social networking account, meaning a user may maintain multiple profiles stored on different Social Network Sites (SNSs). Maintaining these multiple online social network profiles is cumbersome and time-consuming [1]. In this paper we propose a framework for the management of a user's multiple profiles. A demonstrator, called Multiple Profile Manager (MPM), is showcased to illustrate the effectiveness of the framework.
Abstract:
Multiple choice (MC) examinations are frequently used for the summative assessment of large classes because of their ease of marking and their perceived objectivity. However, traditional MC formats usually lead to a surface approach to learning, and do not allow students to demonstrate the depth of their knowledge or understanding. For these reasons, we have trialled the incorporation of short answer (SA) questions into the final examination of two first year chemistry units, alongside MC questions. Students’ overall marks were expected to improve, because they were able to obtain partial marks for the SA questions. Although large differences in some individual students’ performance in the two sections of their examinations were observed, most students received a similar percentage mark for their MC and SA sections, and the overall mean scores were unchanged. In-depth analysis of all responses to a specific question, which was used previously as an MC question and in a subsequent semester in SA format, indicates that the SA format can have weaknesses due to marking inconsistencies that are absent for MC questions. However, inclusion of SA questions improved student scores on the MC section in one examination, indicating that their inclusion may lead to different study habits and deeper learning. We conclude that questions asked in SA format must be carefully chosen in order to optimise the use of marking resources, both financial and human, and questions asked in MC format should be very carefully checked by people trained in writing MC questions. These results, in conjunction with an analysis of the different examination formats used in first year chemistry units, have shaped a recommendation on how to reliably and cost-effectively assess first year chemistry, while encouraging higher order learning outcomes.
Abstract:
Process modelling – the design and use of graphical documentations of an organisation’s business processes – is a key method for documenting and using information about business processes in organisational projects. Yet despite current interest in process modelling, this area of study still faces essential challenges. One of the key unanswered questions concerns the impact of process modelling in organisational practice: process modelling initiatives call for tangible results in the form of returns on the substantial investments that organisations undertake to achieve improved processes. This study explores the impact of process model use on end-users and its contribution to organisational success. We posit that the use of conceptual models creates impact in organisational process teams, and we report on a set of case studies in which we explore tentative evidence for the development of impact from process model use. The results of this work provide a better understanding of the impact of process modelling in information practices and also lead to insights into how organisations should conduct process modelling initiatives in order to achieve an optimum return on their investment.
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the possibility of the units performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also to timestamp events that occur on General Purpose Input/Output (GPIO) pins by referencing a submillisecond-accurate wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real-world application for this functionality is the creation of a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A ‘best-practice’ design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock, and Slave nodes, which determine their local clock offset from that of the Grand Master via synchronisation events that occur multiple times per second. The mechanism used for synchronising the clocks between the boards wirelessly makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement: the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below its target of 1 ms.
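For context, the sketch below shows the textbook IEEE-1588 offset/delay computation from a Sync/Delay_Req timestamp exchange. BabelFuse's firmware protocol is only based on elements of PTP, so this is the standard formulation under an assumed symmetric path delay, with illustrative timestamps, not the thesis design.

```python
# Minimal sketch of the IEEE-1588 (PTP) offset/delay calculation that
# underlies this style of clock synchronisation. Timestamps are invented
# for illustration.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent by master, t2: Sync received by slave,
    t3: Delay_Req sent by slave, t4: Delay_Req received by master.
    Assumes a symmetric path delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# Example: slave clock runs 500 us ahead, true one-way delay 120 us.
t1 = 1_000_000.0                 # microseconds, master time
t2 = t1 + 120 + 500              # arrival per slave clock (delay + offset)
t3 = t2 + 50                     # slave replies 50 us later
t4 = t3 - 500 + 120              # arrival per master clock
print(ptp_offset_and_delay(t1, t2, t3, t4))   # -> (500.0, 120.0)
```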
Abstract:
Reliable ambiguity resolution (AR) is essential to Real-Time Kinematic (RTK) positioning and its applications, since incorrect ambiguity fixing can lead to largely biased positioning solutions. A partial ambiguity fixing technique is developed to improve the reliability of AR, involving partial ambiguity decorrelation (PAD) and partial ambiguity resolution (PAR). Decorrelation transformation can substantially amplify the biases in the phase measurements, so the purpose of PAD is to find the optimum trade-off between decorrelation and worst-case bias amplification. The concept of PAR refers to the case where only a subset of the ambiguities can be fixed correctly to their integers in the integer least-squares (ILS) estimation system at high success rates. As a result, RTK solutions can be derived from these integer-fixed phase measurements. This is meaningful provided that the number of reliably resolved phase measurements is sufficiently large for least-squares estimation of the RTK solutions as well; considering the GPS constellation alone, partially fixed measurements are often insufficient for positioning. The AR reliability is usually characterised by the AR success rate. In this contribution an AR validation decision matrix is first introduced to understand the impact of the success rate, and the AR risk probability is included for a more complete evaluation of AR reliability. We use 16 ambiguity variance-covariance matrices with different levels of success rate to analyse the relation between the success rate and the AR risk probability. Next, the paper examines how, during the PAD process, a bias in one measurement is propagated and amplified onto many others, leading to more than one wrong integer and affecting the success probability. Furthermore, the paper proposes a partial ambiguity fixing procedure with a predefined success rate criterion and a ratio-test in the ambiguity validation process. The approach is tested with simulated Galileo constellation observations. Numerical results from our experiment clearly demonstrate that only when the computed success rate is very high can the AR validation provide decisions about the correctness of AR that are close to the real situation, with both low AR risk and low false alarm probabilities. The results also indicate that the PAR procedure can automatically choose an adequate number of ambiguities to fix at a given high success rate from the multiple constellations, instead of fixing all the ambiguities. This is a benefit that multiple GNSS constellations can offer.
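For illustration, a standard way to characterise AR reliability is the integer-bootstrapping success-rate approximation computed from the conditional standard deviations of the decorrelated ambiguities. The sketch below implements that textbook formula with illustrative sigma values; it is not the paper's procedure or its 16 test matrices.

```python
# Minimal sketch of the integer-bootstrapping AR success-rate formula,
# P_s = prod_i (2 * Phi(1 / (2 * sigma_i|I)) - 1), where sigma_i|I are the
# conditional standard deviations (in cycles) of the decorrelated
# ambiguities. The values below are illustrative assumptions.
from math import erf, prod, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bootstrap_success_rate(cond_sigmas):
    return prod(2.0 * phi(1.0 / (2.0 * s)) - 1.0 for s in cond_sigmas)

print(bootstrap_success_rate([0.05, 0.08, 0.10]))   # well decorrelated: ~1.0
print(bootstrap_success_rate([0.30, 0.40, 0.50]))   # poorly resolved: ~0.49
```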
Abstract:
Background: Evaluation of scapular posture is a fundamental component in the clinical evaluation of the upper quadrant. This study examined the intrarater reliability of scapular posture ratings. Methods: A test-retest reliability investigation was undertaken with one week between assessment sessions. At each session, physical therapists conducted visual assessments of scapular posture (relative to the thorax) in five different scapular postural planes (plane of the scapula, sagittal plane, transverse plane, horizontal plane, and vertical plane). These five plane ratings were performed for four different scapular posture perturbing conditions (rest, and isometric shoulder flexion, abduction, and external rotation). Results: A total of 100 complete scapular posture ratings (50 left, 50 right) were undertaken at each assessment. The observed agreement between the test and retest postural plane ratings ranged from 59% to 87%; 16 of the 20 plane-condition combinations exceeded 75% observed agreement. Kappa (and prevalence-adjusted bias-adjusted kappa) values were inconsistent across the postural planes and perturbing conditions. Conclusions: This investigation generally revealed fair to moderate intrarater reliability in the rating of scapular posture by visual inspection. However, enough disagreement between assessments was present to warrant caution when interpreting perceived changes in scapular position between longitudinal assessments using visual inspection alone.
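For reference, the sketch below computes the agreement statistics reported here (observed agreement, Cohen's kappa, and PABAK) for a hypothetical binary 2x2 test-retest table; the counts are illustrative, not the study data.

```python
# Minimal sketch of observed agreement, Cohen's kappa, and the
# prevalence-adjusted bias-adjusted kappa (PABAK) for a binary rating
# repeated at two sessions. The 2x2 counts are invented for illustration.
def agreement_stats(a, b, c, d):
    """2x2 table: a = yes/yes, b = yes/no, c = no/yes, d = no/no."""
    n = a + b + c + d
    po = (a + d) / n                          # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)     # chance agreement on "yes"
    p_no = ((c + d) / n) * ((b + d) / n)      # chance agreement on "no"
    pe = p_yes + p_no
    kappa = (po - pe) / (1 - pe)
    pabak = 2 * po - 1                        # PABAK for two categories
    return po, kappa, pabak

print(agreement_stats(40, 5, 8, 47))   # e.g. 87% observed agreement
```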
Abstract:
Associations between single nucleotide polymorphisms (SNPs) at 5p15 and multiple cancer types have been reported. We have previously shown evidence for a strong association between prostate cancer (PrCa) risk and rs2242652 at 5p15, intronic in TERT, the gene that encodes telomerase reverse transcriptase. To comprehensively evaluate the association between genetic variation across this region and PrCa, we performed a fine-mapping analysis by genotyping 134 SNPs using a custom Illumina iSelect array or Sequenom MassArray iPlex, followed by imputation of 1094 SNPs, in 22 301 PrCa cases and 22 320 controls in the PRACTICAL consortium. Multiple stepwise logistic regression analysis identified four signals in the promoter or intronic regions of TERT that were independently associated with PrCa risk. Gene expression analysis of normal prostate tissue showed evidence that SNPs within one of these regions were also associated with TERT expression, providing a potential mechanism for predisposition to disease.
Speaker attribution of multiple telephone conversations using a complete-linkage clustering approach
Abstract:
In this paper we propose and evaluate a speaker attribution system using a complete-linkage clustering method. Speaker attribution refers to the annotation of a collection of spoken audio based on speaker identities. This can be achieved using diarization and speaker linking. The main challenge associated with attribution is achieving computational efficiency when dealing with large audio archives. Traditional agglomerative clustering methods with model merging and retraining are not feasible for this purpose. This has motivated the use of linkage clustering methods without retraining. We first propose a diarization system using complete-linkage clustering and show that it outperforms traditional agglomerative and single-linkage clustering based diarization systems with a relative improvement of 40% and 68%, respectively. We then propose a complete-linkage speaker linking system to achieve attribution and demonstrate a 26% relative improvement in attribution error rate (AER) over the single-linkage speaker linking approach.
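As a minimal illustration of the clustering step, the sketch below applies complete-linkage clustering to toy "speaker embedding" vectors using SciPy; the embeddings, distance metric, and threshold are assumptions, and a real attribution system would cluster speaker models with calibrated scores rather than raw Euclidean distances.

```python
# Minimal sketch of complete-linkage clustering over speaker segments, in the
# spirit of linkage-based diarization/linking without model retraining.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# Fake "speaker embeddings": two speakers, five segments each (assumed).
segments = np.vstack([rng.normal(0.0, 0.1, size=(5, 8)),
                      rng.normal(1.0, 0.1, size=(5, 8))])

dists = pdist(segments, metric="euclidean")        # pairwise distances
Z = linkage(dists, method="complete")              # complete-linkage tree
labels = fcluster(Z, t=1.0, criterion="distance")  # cut at a threshold
print(labels)   # segments 1-5 and 6-10 fall into two clusters
```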
Abstract:
This paper proposes a framework to analyse performance on multiple choice questions, with a focus on linguistic factors. Item Response Theory (IRT) is deployed to estimate ability and question difficulty levels. A logistic regression model is used to detect Differential Item Functioning questions. Probit models test the relationships between performance and linguistic factors, controlling for the effects of question construction and students’ background. The empirical results have important implications: the lexical density of stems affects performance; the use of non-Economics specialised vocabulary has differing impacts on the performance of students with different language backgrounds; and the IRT-based ability and difficulty estimates help explain performance variations.
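For context, the sketch below shows the two-parameter logistic (2PL) IRT response function commonly used for this kind of ability and difficulty estimation; the ability, discrimination, and difficulty values are illustrative, not estimates from the study.

```python
# Minimal sketch of the 2PL IRT model: the probability that a student of
# ability theta answers an item correctly, given the item's discrimination a
# and difficulty b. Parameter values are invented for illustration.
from math import exp

def p_correct(theta, a, b):
    return 1.0 / (1.0 + exp(-a * (theta - b)))

# An average student (theta = 0) on an easy vs. a hard item:
print(p_correct(0.0, a=1.2, b=-1.0))   # easy item -> ~0.77
print(p_correct(0.0, a=1.2, b=1.5))    # hard item -> ~0.14
```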
Abstract:
A newly developed computational approach is proposed in the paper for the analysis of multiple crack problems, based on the eigen crack opening displacement (COD) boundary integral equations. The eigen COD particularly refers to a crack in an infinite domain under fictitious traction acting on the crack surface. With the concept of eigen COD, multiple cracks in great number can be solved by using the conventional displacement discontinuity boundary integral equations in an iterative fashion with a small-sized system matrix, determining all the unknown CODs step by step. To deal with the interactions among cracks in multiple crack problems, all cracks are divided into two groups, namely the adjacent group and the far-field group, according to their distance to the crack currently under consideration. The adjacent group contains cracks with relatively small distances but strong effects on the current crack, while the far-field group is composed of those with relatively large distances. Correspondingly, the eigen COD of the current crack is computed in two parts: the first part is computed from the fictitious tractions of the adjacent cracks via the local Eshelby matrix derived from the traction boundary integral equations in discretised form, while the second part is computed from those of the far-field cracks, so that high computational efficiency is achieved in the proposed approach. The numerical results of the proposed approach are compared not only with those of the dual boundary integral equations (D-BIE) and the BIE with numerical Green's functions (NGF) but also with the analytical solutions in the literature. The effectiveness and the efficiency of the proposed approach are verified. Numerical examples are provided for the stress intensity factors of cracks, up to several thousands in number, in both finite and infinite plates.