547 results for least common subgraph algorithm


Relevance: 20.00%

Publisher:

Abstract:

This paper presents an approach to derive requirements for an avionics architecture that provides onboard sense-and-avoid and autonomous emergency forced landing capabilities to a UAS. The approach is based on two design paradigms: (1) analyzing the common functionality between these two functions to derive requirements for sensors, computing capability, interfaces, etc.; and (2) considering the risk and safety mitigation associated with these functions to derive certification requirements for the system design. We propose to use the Aircraft Certification Matrix (ACM) approach to tailor the system Development Assurance Levels (DAL) and architecture requirements in accordance with acceptable risk criteria. This architecture is developed under the name “Flight Guardian”. Flight Guardian is an avionics architecture that integrates common sensory elements that are essential components of any UAS required to be dependable. The Flight Guardian concept is also applicable to conventionally piloted aircraft, where it will serve to reduce cockpit workload.

Relevance: 20.00%

Publisher:

Abstract:

The 2010 LAGI competition was held on three underutilized sites in the United Arab Emirates. By choosing Staten Island, New York in 2012, the competition organisers have again brought into question new roles for public open space in the contemporary city. In the case of the UAE sites, the competition produced many entries that aimed to create a sculpture and, by doing so, attract people to the selected empty spaces in an arid climate. In a way these proposals were the incubators and new characters of these empty spaces. The competition was thus successful in advancing understandings of the expanded role of public open spaces in the UAE and elsewhere. LAGI 2012 differs significantly from the UAE program because Fresh Kills Park has already been planned as a public open space for New Yorkers - with or without these clean energy sculptures. Furthermore, Fresh Kills Park is already a (gas) energy generating site in its own right. We believe Fresh Kills Park, as a site, presents a problem which somewhat transcends the aims of the competition brief. Advancing a sustainable urban design proposition for the site therefore requires a fundamental reconsideration of the established paradigms of public open space. Hence our strategy is not only to create an energy generating, site-specific artwork, but to create synergy between the public and the site while at the same time complementing the idiosyncrasies of the pre-existing engineered landscape. Current PhD research about energy generation in public open spaces informs this work.

Relevance: 20.00%

Publisher:

Abstract:

Software as a Service (SaaS) in the Cloud has recently become more and more significant to software users and providers. A SaaS that is delivered as a composite application has many benefits, including reduced delivery costs, flexible offers of the SaaS functions and decreased subscription cost for users. However, this approach introduces a new problem in managing the resources allocated to the composite SaaS. The resource allocation made at the initial stage may become overloaded or wasted due to the dynamic environment of a Cloud. Typical data center resource management usually triggers a placement reconfiguration for the SaaS in order to maintain its performance as well as to minimize the resources used. Existing approaches to this problem often ignore the underlying dependencies between SaaS components. In addition, the reconfiguration also has to comply with SaaS constraints in terms of its resource requirements, placement requirements and its SLA. To tackle the problem, this paper proposes a penalty-based Grouping Genetic Algorithm for clustering the components of multiple composite SaaS applications in the Cloud. The main objective is to minimize the resources used by the SaaS by clustering its components without violating any constraint. Experimental results demonstrate the feasibility and the scalability of the proposed algorithm.
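
As a rough illustration of the penalty-based grouping idea (a minimal sketch, not the authors' implementation), the code below encodes each SaaS component's server assignment as a gene and penalises capacity violations in the fitness function rather than repairing them. All component demands, server capacities and penalty weights are hypothetical.

```python
import random

COMPONENT_CPU = [2, 1, 3, 2, 1, 4]      # hypothetical resource demands per component
SERVER_CPU_CAPACITY = 6                  # hypothetical capacity of each server
N_SERVERS = 4
PENALTY = 100                            # weight applied to capacity violations

def fitness(assignment):
    """Lower is better: number of servers used plus a penalty for any overload."""
    load = {}
    for comp, server in enumerate(assignment):
        load[server] = load.get(server, 0) + COMPONENT_CPU[comp]
    servers_used = len(load)
    overload = sum(max(0, l - SERVER_CPU_CAPACITY) for l in load.values())
    return servers_used + PENALTY * overload

def evolve(pop_size=30, generations=200, mutation_rate=0.1):
    pop = [[random.randrange(N_SERVERS) for _ in COMPONENT_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < mutation_rate:  # mutate a single gene
                child[random.randrange(len(child))] = random.randrange(N_SERVERS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```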

Relevance: 20.00%

Publisher:

Abstract:

Improving energy efficiency has become increasingly important in data centers in recent years in order to curb rapidly growing electricity consumption. The power dissipation of the physical servers is the root cause of the power usage of other systems, such as cooling systems. Many efforts have been made to make data centers more energy efficient. One of them is to minimize the total power consumption of the servers in a data center through virtual machine consolidation, which is implemented by virtual machine placement. The placement problem is often modeled as a bin packing problem. Due to the NP-hard nature of the problem, heuristic solutions such as the First Fit and Best Fit algorithms have often been used and generally produce good results. However, their performance leaves room for further improvement. In this paper we propose a Simulated Annealing based algorithm, which aims at further improvement from any feasible placement. This is the first published attempt to use SA to solve the VM placement problem to optimize power consumption. Experimental results show that this SA algorithm can generate better results, saving up to 25 percent more energy than First Fit Decreasing in an acceptable time frame.
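
The sketch below illustrates the general idea only, assuming a simple linear server power model and a move-one-VM neighbourhood; the paper's actual power model, neighbourhood and cooling schedule are not specified here, and all workloads and power figures are made up.

```python
import math, random

VM_CPU = [0.2, 0.5, 0.3, 0.6, 0.4, 0.1, 0.7, 0.2]   # hypothetical CPU demands
HOST_CAPACITY = 1.0
N_HOSTS = 8
P_IDLE, P_PEAK = 100.0, 250.0                        # hypothetical host power (W)

def power(placement):
    """Total power of active hosts, assuming power grows linearly with load."""
    load = [0.0] * N_HOSTS
    for vm, host in enumerate(placement):
        load[host] += VM_CPU[vm]
    if any(l > HOST_CAPACITY for l in load):
        return float("inf")                          # infeasible placement
    return sum(P_IDLE + (P_PEAK - P_IDLE) * l for l in load if l > 0)

def anneal(placement, t0=50.0, cooling=0.995, steps=20000):
    """Start from a feasible placement and try to reduce total power."""
    current, best = list(placement), list(placement)
    t = t0
    for _ in range(steps):
        neighbour = list(current)
        neighbour[random.randrange(len(VM_CPU))] = random.randrange(N_HOSTS)
        delta = power(neighbour) - power(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = neighbour
            if power(current) < power(best):
                best = list(current)
        t *= cooling                                 # geometric cooling
    return best

initial = list(range(len(VM_CPU)))                   # one VM per host: feasible start
best = anneal(initial)
print(best, power(best))
```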

Relevance: 20.00%

Publisher:

Abstract:

Server consolidation using virtualization technology has become an important technique for improving the energy efficiency of data centers, and virtual machine placement is the key to server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center and ignore the energy consumed by the data center's communication network. The energy consumption of the communication network in a data center is not trivial, and should therefore be considered in virtual machine placement in order to make the data center more energy efficient. In this paper, we propose a genetic algorithm for a new virtual machine placement problem that considers the energy consumption in both the servers and the communication network of the data center. Experimental results show that the genetic algorithm performs well when tackling test problems of different kinds, and scales up well when the problem size increases.
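
To make the combined objective concrete, here is a hedged sketch of the kind of fitness function such a genetic algorithm might minimise: server power plus a traffic- and hop-weighted network cost. The toy topology model, traffic matrix and coefficients below are hypothetical and not taken from the paper.

```python
VM_CPU = [0.3, 0.4, 0.2, 0.5]
TRAFFIC = {(0, 1): 10.0, (1, 2): 4.0, (2, 3): 8.0}   # hypothetical Mb/s between VM pairs
P_IDLE, P_PEAK = 100.0, 250.0                        # hypothetical host power (W)
ENERGY_PER_HOP_MB = 0.05                             # hypothetical network cost coefficient

def hop_count(host_a, host_b):
    # Toy topology: 0 hops on the same host, 2 via a shared edge switch,
    # 4 via the core; a real model would follow the actual network topology.
    if host_a == host_b:
        return 0
    return 2 if host_a // 2 == host_b // 2 else 4

def total_energy(placement):
    """Server power (linear-in-load model) plus traffic-weighted network cost."""
    load = {}
    for vm, host in enumerate(placement):
        load[host] = load.get(host, 0.0) + VM_CPU[vm]
    server = sum(P_IDLE + (P_PEAK - P_IDLE) * l for l in load.values())
    network = sum(rate * ENERGY_PER_HOP_MB * hop_count(placement[a], placement[b])
                  for (a, b), rate in TRAFFIC.items())
    return server + network

# Consolidated placement vs. one VM per host
print(total_energy([0, 0, 1, 1]), total_energy([0, 1, 2, 3]))
```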

Relevance: 20.00%

Publisher:

Abstract:

In the context of ambiguity resolution (AR) for Global Navigation Satellite Systems (GNSS), decorrelation among the entries of an ambiguity vector, integer ambiguity search and ambiguity validation are three standard procedures for solving integer least-squares problems. This paper contributes to AR issues in three respects. Firstly, the orthogonality defect is introduced as a new measure of the performance of ambiguity decorrelation methods, and compared with the decorrelation number and the condition number, which are currently used as criteria for measuring the correlation of the ambiguity variance-covariance matrix. Numerically, the orthogonality defect performs slightly better than the condition number as a measure linking decorrelation impact to computational efficiency. Secondly, the paper examines the relationship of the decorrelation number, the condition number, the orthogonality defect and the size of the ambiguity search space with the number of ambiguity search candidates and search nodes. The size of the ambiguity search space can be properly estimated if the ambiguity matrix is well decorrelated, and is shown to be a significant parameter in the ambiguity search process. Thirdly, a new ambiguity resolution scheme is proposed to improve ambiguity search efficiency by controlling the size of the ambiguity search space. The new AR scheme combines the LAMBDA search and validation procedures, which results in a much smaller search space and higher computational efficiency while retaining the same AR validation outcomes. In fact, the new scheme can deal with the case where there is only one candidate, whereas existing search methods require at least two candidates. If there is more than one candidate, the new scheme reverts to the usual ratio-test procedure. Experimental results indicate that this combined method can indeed improve ambiguity search efficiency for both single-constellation and dual-constellation cases, showing its potential for processing high-dimension integer parameters in multi-GNSS environments.
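
For readers unfamiliar with the measure, the sketch below computes the standard orthogonality defect (Hadamard ratio) of a basis matrix; it is shown here applied to the Cholesky factor of an illustrative variance-covariance matrix, and the paper's exact formulation may differ in detail.

```python
import numpy as np

def orthogonality_defect(basis):
    """Hadamard ratio of a basis matrix (columns = basis vectors): the product of
    the column norms divided by the basis volume. It equals 1 for an orthogonal
    basis and grows as the basis vectors become more correlated."""
    col_norms = np.linalg.norm(basis, axis=0)
    volume = np.sqrt(np.linalg.det(basis.T @ basis))
    return np.prod(col_norms) / volume

# Illustrative (made-up) ambiguity variance-covariance matrix and its Cholesky factor
Q = np.array([[4.0, 3.2, 2.8],
              [3.2, 4.5, 3.0],
              [2.8, 3.0, 3.8]])
L = np.linalg.cholesky(Q)
print(orthogonality_defect(L))          # > 1: correlated basis
print(orthogonality_defect(np.eye(3)))  # = 1: perfectly decorrelated
```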

Relevance: 20.00%

Publisher:

Abstract:

Background

The onsite treatment of sewage and effluent disposal is widely prevalent in rural and urban fringe areas due to the general unavailability of reticulated wastewater collection systems. Despite the low technology of the systems, failure is common and in many cases leads to adverse public health and environmental consequences. It is therefore important that careful consideration is given to the design and location of onsite sewage treatment systems. This requires an understanding of the factors that influence treatment performance. The use of subsurface absorption systems is the most common form of effluent disposal for onsite sewage treatment, particularly for septic tanks. Also, in the case of septic tanks, a subsurface disposal system is generally an integral component of the sewage treatment process. Site specific factors play a key role in the onsite treatment of sewage.

The project

The primary aims of the research project were:
• to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site;
• to evaluate current research relating to onsite sewage treatment; and
• to identify key issues where there is currently a lack of relevant research.
These tasks were undertaken with the objective of facilitating the development of performance based planning and management strategies for onsite sewage treatment. The primary focus of this research project has been on septic tanks. By implication, the investigation has been confined to subsurface soil absorption systems. The design and treatment processes taking place within the septic tank chamber itself did not form a part of the investigation. Five broad categories of soil types prevalent in the Brisbane region have been considered in this project. The number of systems investigated was based on the proportionate area of urban development within the Brisbane region located on each of the different soil types. In the initial phase of the investigation, the majority of the systems evaluated were septic tanks. However, a small number of aerobic wastewater treatment systems (AWTS) were also included. The primary aim was to compare the effluent quality of systems employing different generic treatment processes. It is important to note that the number of each different type of system investigated was relatively small. Consequently, this does not permit a statistical analysis of the results obtained when comparing different systems. This is an important issue considering the large number of soil physico-chemical parameters and landscape factors that can influence treatment performance, and their wide variability.

The report

This report is the last in a series of three reports focussing on the performance evaluation of onsite treatment of sewage. The research project was initiated at the request of the Brisbane City Council. The project component discussed in the current report outlines the detailed soil investigations undertaken at a selected number of sites. In the initial field sampling, a number of soil chemical properties were assessed as indicators to investigate the extent of effluent flow and to help understand which soil factors renovate the applied effluent. The soil profile attributes, especially texture, structure and moisture regime, were examined more in an engineering sense to determine the effect of the movement of water into and through the soil. It is important to note that not only the physical characteristics, but also the chemical characteristics of the soil as well as landscape factors play a key role in the effluent renovation process. In order to understand the complex processes taking place in a subsurface effluent disposal area, influential parameters were identified using soil chemical concepts. Accordingly, the primary focus of this final phase of the research project was to identify linkages between various soil chemical parameters and landscape patterns and their contribution to the effluent renovation process. The research outcomes will contribute to the development of robust criteria for evaluating the performance of subsurface effluent disposal systems.

The outcomes

The key findings from the soil investigations undertaken are:
• Effluent renovation is primarily undertaken by a combination of various soil physico-chemical parameters and landscape factors, thereby making the effluent renovation processes strongly site dependent.
• Decisions regarding site suitability for effluent disposal should not be based purely on the soil type. A number of other factors, such as the site location in the catena, the drainage characteristics and other physical and chemical characteristics, also exert a strong influence on site suitability.
• Sites which are difficult to characterise in terms of suitability for effluent disposal will require a detailed soil physical and chemical analysis to a minimum depth of at least 1.2 m.
• The Ca:Mg ratio and the Exchangeable Sodium Percentage (ESP) are important parameters in soil suitability assessment. A Ca:Mg ratio of less than 0.5 would generally indicate a high ESP. This in turn would mean that Na and possibly Mg are the dominant exchangeable cations, leading to probable clay dispersion (see the illustrative sketch after this list).
• A Ca:Mg ratio greater than 0.5 would generally indicate a low ESP in the profile, which in turn indicates increased soil stability.
• In soils with a higher clay percentage, a low ESP can have a significant effect.
• The presence of high exchangeable Na can be counteracted by the presence of swelling clays and an exchange complex co-dominated by exchangeable Ca and exchangeable Mg. This aids the absorption of cations at depth, thereby reducing the likelihood of dispersion.
• Salt is continually added to the soil by the effluent, and problems may arise if the added salts accumulate to a concentration that is harmful to the soil structure. Under such conditions, good drainage is essential in order to allow continuous movement of water and salt through the profile. Therefore, for a site to be sustainable, it will have a maximum application rate of effluent, which will depend on the subsurface characteristics and the surface area available for effluent disposal.
• The dosing regime for effluent disposal can play a significant role in the prevention of salt accumulation at poorly draining sites. Though intermittent dosing was not considered satisfactory for the removal of the clogging mat layer, it has positive attributes in the context of removing accumulated salts from the soil.
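
Purely to illustrate the Ca:Mg rule of thumb reported in the findings above (the threshold is indicative only and no substitute for a full site assessment), a toy decision function might look like this:

```python
def dispersion_risk(ca_mg_ratio):
    """Toy illustration of the reported rule of thumb: a Ca:Mg ratio below 0.5
    generally indicates a high Exchangeable Sodium Percentage and hence probable
    clay dispersion; above 0.5 suggests a more stable profile."""
    if ca_mg_ratio < 0.5:
        return "probable clay dispersion (high ESP likely)"
    return "increased soil stability (low ESP likely)"

print(dispersion_risk(0.3))
print(dispersion_risk(1.2))
```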

Relevance: 20.00%

Publisher:

Abstract:

Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user defined precisions suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.

Relevance: 20.00%

Publisher:

Abstract:

3D models of long bones are being utilised in a number of fields, including orthopaedic implant design. Accurate reconstruction of 3D models is of utmost importance for designing implants that achieve good alignment between two bone fragments. For this purpose, CT scanners are employed to acquire accurate bone data, exposing an individual to a high amount of ionising radiation. Magnetic resonance imaging (MRI) has been shown to be a potential alternative to computed tomography (CT) for scanning volunteers for 3D reconstruction of long bones, essentially avoiding the high radiation dose from CT. In MRI of long bones, artefacts due to random movements of the skeletal system create challenges, as they generate inaccuracies in 3D models reconstructed from data sets containing such artefacts. One of the defects observed during an initial study is a lateral shift artefact in the reconstructed 3D models. This artefact is believed to result from volunteers moving the leg between two successive scanning stages (the lower limb has to be scanned in at least five stages due to the limited scanning length of the scanner). As this artefact creates inaccuracies in the implants designed using these models, it needs to be corrected before the 3D models are applied to implant design. Therefore, this study aimed to correct the lateral shift artefact using 3D modelling techniques. The femora of five ovine hind limbs were scanned with a 3T MRI scanner using a 3D VIBE based protocol. The scanning was conducted in two halves, while maintaining a good overlap between them. A lateral shift was generated by moving the limb several millimetres between the two scanning stages. The 3D models were reconstructed using a multi-threshold segmentation method. The correction of the artefact was achieved by aligning the two halves using the robust iterative closest point (ICP) algorithm, with the help of the overlapping region between the two. The models with the corrected artefact were compared with the reference model generated by CT scanning of the same sample. The results indicate that the correction of the artefact was achieved with an average deviation of 0.32 ± 0.02 mm between the corrected model and the reference model. In comparison, the model obtained from a single MRI scan showed an average error of 0.25 ± 0.02 mm when compared with the reference model. An average deviation of 0.34 ± 0.04 mm was seen when the models generated after the table was moved were compared to the reference models; thus, the movement of the table is also a contributing factor to the motion artefacts.
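
For readers unfamiliar with ICP, the sketch below shows a minimal point-to-point variant (closest-point matching plus a Kabsch/SVD rigid fit) that aligns one point cloud to another. The study used a robust ICP implementation; the outlier rejection and weighting that "robust" implies are omitted here, and the example data are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping paired points src -> dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(moving, fixed, iterations=50):
    """Align 'moving' (e.g. the shifted half) to 'fixed' (the reference half) by
    iterating closest-point matching and the rigid fit above."""
    tree = cKDTree(fixed)
    current = moving.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)             # closest points in 'fixed'
        R, t = best_rigid_transform(current, fixed[idx])
        current = current @ R.T + t
    return current

# Toy usage: recover a small simulated lateral shift between overlapping point sets
fixed = np.random.rand(500, 3)
moving = fixed + np.array([0.004, 0.0, 0.0])
aligned = icp(moving, fixed)
print(np.abs(aligned - fixed).mean())            # should shrink towards zero
```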

Relevance: 20.00%

Publisher:

Abstract:

Proving the security of cryptographic schemes, which are normally short algorithms, is known to be time-consuming and easy to get wrong. Using computers to analyse their security can help solve this problem. This thesis focuses on methods of using computers to verify the security of such schemes in cryptographic models. The contributions of this thesis to automated security proofs of cryptographic schemes can be divided into two groups: indirect and direct techniques. Regarding indirect techniques, we propose a technique to verify the security of public-key-based key exchange protocols. The security of such protocols can already be proved automatically using an existing tool, but only in a non-cryptographic model. We show that under some conditions, security in that non-cryptographic model implies security in a common cryptographic one, the Bellare-Rogaway model [11]. This implication enables one to use that existing tool, which was designed to work with a different type of model, to achieve security proofs of public-key-based key exchange protocols in a cryptographic model. For the direct techniques, we have two contributions. The first is a tool to verify Diffie-Hellman-based key exchange protocols. In that work, we design a simple programming language for specifying Diffie-Hellman-based key exchange algorithms. The language has a semantics based on a cryptographic model, the Bellare-Rogaway model [11]. From the semantics, we build a Hoare-style logic which allows us to reason about the security of a key exchange algorithm, specified as a pair of initiator and responder programs. The other direct contribution concerns automated proofs of computational indistinguishability. Unlike the two other contributions, this one does not treat a fixed class of protocols. We construct a generic formalism which allows one to model the security problem of a variety of classes of cryptographic schemes as the indistinguishability between two pieces of information. We also design and implement an algorithm for solving indistinguishability problems. Compared to the two other works, this one covers significantly more types of schemes, but consequently it can verify only weaker forms of security.

Relevance: 20.00%

Publisher:

Abstract:

Aims: To identify risk factors for major adverse events (AEs) and to develop a nomogram to predict the probability of such AEs in individual patients who have surgery for apparent early stage endometrial cancer. Methods: We used data from 753 patients who were randomized to either total laparoscopic hysterectomy or total abdominal hysterectomy in the LACE trial. Serious adverse events that prolonged hospital stay, or postoperative adverse events of Common Terminology Criteria grade 3+ (CTCAE V3), were considered major AEs. We analyzed pre-surgical characteristics associated with the risk of developing major AEs by multivariate logistic regression, and identified a parsimonious model by backward stepwise logistic regression. The six most significant or clinically important variables were included in a nomogram to predict the risk of major AEs within 6 weeks of surgery, and the nomogram was internally validated. Results: Overall, 132 (17.5%) patients had at least one major AE. An open surgical approach (laparotomy), a higher Charlson medical co-morbidity score, moderately differentiated tumours on curettings, a higher baseline ECOG score, a higher body mass index and low haemoglobin levels were associated with AEs and were used in the nomogram. The bootstrap-corrected concordance index of the nomogram was 0.63 and it showed good calibration. Conclusions: Six pre-surgical factors independently predicted the risk of major AEs. This research might form the basis for developing risk reduction strategies to minimize the risk of AEs among patients undergoing surgery for apparent early stage endometrial cancer.
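
As a hedged sketch of the modelling step only (not the trial analysis), the code below fits a multivariate logistic regression on six predictors and reports the concordance index, which for a binary outcome equals the area under the ROC curve. The data are random placeholders, not LACE trial data, and the column names merely mirror the six factors listed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder data: 753 patients, six hypothetical pre-surgical predictors
# (surgical approach, co-morbidity score, tumour grade, ECOG, BMI, haemoglobin).
rng = np.random.default_rng(0)
X = rng.normal(size=(753, 6))
y = rng.integers(0, 2, size=753)           # 1 = major adverse event (placeholder)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]        # predicted probability of a major AE
print(roc_auc_score(y, risk))              # concordance (c-) index for a binary outcome
```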

Relevance: 20.00%

Publisher:

Abstract:

AIM: To compare Total Laparoscopic Hysterectomy (TLH) and Total Abdominal Hysterectomy (TAH) with regard to surgical safety. METHODS: Between October 2005 and June 2010, 760 patients with apparent early stage endometrial cancer were enrolled in a multicentre, randomised clinical trial (LACE) comparing outcomes following TLH or TAH. The main study end points for this analysis were surgical adverse events (AEs), hospital length of stay and conversion from laparoscopy to laparotomy; the analysis included the 753 patients who completed at least 6 weeks of follow-up. Postoperative AEs were graded according to Common Toxicity Criteria (V3), and those that were immediately life-threatening, required inpatient or prolonged hospitalisation, or resulted in persistent or significant disability/incapacity were regarded as serious AEs. RESULTS: The incidence of intra-operative AEs was comparable in the two groups. The incidence of postoperative AEs of CTC grade 3+ (18.6% in TAH, 12.9% in TLH, p = 0.03) and of serious AEs (14.3% in TAH, 8.2% in TLH, p = 0.007) was significantly higher in the TAH group than in the TLH group. Mean operating time was 132 and 107 min, and median length of hospital stay was 2 and 5 days, in the TLH and TAH groups respectively (p < 0.0001). The decline in haemoglobin from baseline to day 1 postoperatively was 2 g/L less in the TLH group (p = 0.006). CONCLUSIONS: Compared to TAH, TLH is associated with a significantly decreased risk of major surgical AEs. A laparoscopic surgical approach to early stage endometrial cancer is safe.

Relevance: 20.00%

Publisher:

Abstract:

A simple and effective down-sampling algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high frequency condition monitoring (CM) techniques and for low speed machine applications, since the combination of a high sampling frequency and a low rotating speed will generally lead to unwieldy data sizes. The effectiveness of the algorithm was evaluated and tested on four sets of data in the study. One set of data was extracted from the condition monitoring signal of a practical industry application. Another set was acquired from a low speed machine test rig in the laboratory. The other two sets were computer-simulated bearing defect signals having either a single defect or multiple bearing defects. The results show that the PHDS algorithm can substantially reduce the size of the data while preserving the critical bearing defect information for all the data sets used in this work, even when a large down-sample ratio was used (e.g., 500 times down-sampled). In contrast, down-sampling with an existing standard signal-processing technique eliminated useful and critical information such as bearing defect frequencies when the same down-sample ratio was employed. Noise and artificial frequency components were also induced by the standard down-sample technique, thus limiting its usefulness for machine condition monitoring applications.
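
The sketch below illustrates one plausible reading of a peak-hold down-sampling step, assuming each output sample keeps the peak-magnitude value of its block of input samples so that short bearing-defect impulses survive the reduction; the published PHDS algorithm may differ in detail.

```python
import numpy as np

def peak_hold_downsample(signal, ratio):
    """Keep the signed peak (largest absolute value) of each block of 'ratio'
    consecutive samples. Trailing samples that do not fill a block are dropped."""
    signal = np.asarray(signal, dtype=float)
    n_blocks = len(signal) // ratio
    blocks = signal[: n_blocks * ratio].reshape(n_blocks, ratio)
    idx = np.argmax(np.abs(blocks), axis=1)          # position of each block's peak
    return blocks[np.arange(n_blocks), idx]          # signed peak values

# Example: a 500-fold reduction of a long vibration record
x = np.random.randn(1_000_000)
y = peak_hold_downsample(x, 500)
print(len(y))    # 2000 samples
```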

Relevance: 20.00%

Publisher:

Abstract:

Fast calculation of quantities such as in-cylinder volume and indicated power is important in internal combustion engine research. Multiple channels of data including crank angle and pressure were collected for this purpose using a fully instrumented diesel engine research facility. Currently, existing methods use software to post-process the data, first calculating volume from crank angle, then calculating the indicated work and indicated power from the area enclosed by the pressure-volume indicator diagram. Instead, this work investigates the feasibility of achieving real-time calculation of volume and power via hardware implementation on Field Programmable Gate Arrays (FPGAs). Alternative hardware implementations were investigated using lookup tables, Taylor series methods or the CORDIC (CoOrdinate Rotation DIgital Computer) algorithm to compute the trigonometric operations in the crank angle to volume calculation, and the CORDIC algorithm was found to use the least amount of resources. Simulation of the hardware based implementation showed that the error in the volume and indicated power is less than 0.1%.
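
To show why CORDIC suits this calculation, the sketch below computes cosine and sine with rotation-mode CORDIC (shift-and-add operations only, which map naturally onto FPGA fabric) and then evaluates the standard slider-crank in-cylinder volume formula. The engine geometry values are hypothetical and the floating-point Python is only a model of what would be fixed-point hardware.

```python
import math

def cordic_cos_sin(theta, n_iter=24):
    """Rotation-mode CORDIC for cos/sin; valid for |theta| <= ~1.74 rad,
    so larger crank angles need quadrant reduction first."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    gain = 1.0
    for i in range(n_iter):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        x, y, z = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i, z - d * angles[i]
    return x * gain, y * gain

def cylinder_volume(theta, bore, stroke, conrod, clearance_volume):
    """Slider-crank in-cylinder volume for crank angle theta (radians from TDC)."""
    a = stroke / 2.0                                   # crank radius
    cos_t, sin_t = cordic_cos_sin(theta)
    piston_pos = a * cos_t + math.sqrt(conrod ** 2 - (a * sin_t) ** 2)
    height = conrod + a - piston_pos                   # piston travel from TDC
    return clearance_volume + math.pi * bore ** 2 / 4.0 * height

# Hypothetical engine geometry (metres, cubic metres)
print(cylinder_volume(math.radians(30), bore=0.1, stroke=0.12, conrod=0.18,
                      clearance_volume=5e-5))
```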

Relevance: 20.00%

Publisher:

Abstract:

A fundamental problem faced by stereo vision algorithms is that of determining correspondences between the two images which comprise a stereo pair. This paper presents work towards the development of a new matching algorithm based on the rank transform. This algorithm makes use of both area-based and edge-based information, and is therefore referred to as a hybrid algorithm. In addition, the algorithm uses a number of matching constraints, including the novel rank constraint. Results obtained using a number of test pairs show that the matching algorithm is capable of removing a significant proportion of invalid matches. The accuracy of matching in the vicinity of edges is also improved.
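
The sketch below shows only the standard rank transform (not the paper's hybrid algorithm or its novel rank constraint): each pixel is replaced by the count of neighbours in a local window whose intensity is lower than the centre pixel's, which makes a simple area-based matching cost robust to radiometric differences between the two cameras. The window size and test images are arbitrary.

```python
import numpy as np

def rank_transform(image, window=5):
    """Replace each pixel with the number of pixels in its window that are
    strictly darker than the centre pixel (edge pixels use replicated borders)."""
    r = window // 2
    padded = np.pad(image.astype(float), r, mode="edge")
    out = np.zeros(image.shape, dtype=np.int32)
    h, w = image.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[r + dy : r + dy + h, r + dx : r + dx + w]
            out += (neighbour < image).astype(np.int32)
    return out

# The transform depends only on local intensity ordering, so a uniform
# brightness offset between the two images leaves it unchanged.
left = np.random.randint(0, 255, size=(32, 32))
right = left + 20
print(np.array_equal(rank_transform(left), rank_transform(right)))  # True
```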