23 results for Lyra Minima


Relevance:

20.00%

Publisher:

Abstract:

The low- and high-frequency components of a rustling sound, created when prey (a freshly killed frog) was jerkily pulled on dry and wet sandy floors and on asbestos, were recorded and played back to individual Indian false vampire bats (Megaderma lyra). Megaderma lyra responded with flight toward the speakers and captured dead frogs that were provided as a reward. The spectral peaks were at 8.6, 7.1 and 6.8 kHz for the low-frequency components of the sounds created on the dry, asbestos and wet floors, respectively. The spectral peaks for the high-frequency sounds created on the respective floors were at 36.8, 27.2 and 23.3 kHz. The sound from the dry floor was more intense than that from the other two substrata. Prey movements that generated sonic or ultrasonic sounds were both necessary and sufficient for the bats to detect and capture prey. The number of successful prey captures was significantly greater for the dry-floor sound, especially for its high-frequency components. Bat responses were low to the wet-floor sounds and moderate to the asbestos-floor sounds. The bats did not respond to the sound of unrecorded parts of the tape. Even though the bats flew toward the speakers and captured the dead frogs when the prey-generated sounds were played back, we cannot rule out the possibility of M. lyra using echolocation to localize prey. However, the study indicates that prey moving on dry sandy floors are more vulnerable to predation by M. lyra.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, the placement and sizing of Distributed Generators (DGs) in distribution networks are determined optimally. The objective is to minimize line loss and to improve reliability. The constraints are the bus voltage, the feeder current, and the reactive power flowing back to the source side. The placement and size of DGs are optimized using a combination of Discrete Particle Swarm Optimization (DPSO) and a Genetic Algorithm (GA). This combination increases the diversity of the optimizing variables in DPSO so that the search does not become stuck in local minima. To evaluate the proposed algorithm, the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System (RBTS), located at the secondary side of a 33/11 kV distribution substation, is used. The results illustrate the efficiency of the proposed method.
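
The DPSO/GA combination described above can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: the bit-string encoding, the target placement vector, the cost function and all parameters are invented for the example; only the mechanics (a discrete position update drawn from the particle, its personal best and the global best, plus a GA-style bit-flip mutation that preserves diversity) follow the idea in the abstract.

```python
import random

random.seed(0)

N_BITS, N_PARTICLES, N_ITERS = 12, 20, 60
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # hypothetical optimal DG placement

def cost(bits):
    # toy stand-in for the loss/reliability objective: Hamming distance to the optimum
    return sum(b != t for b, t in zip(bits, TARGET))

def mutate(bits, rate=0.05):
    # GA-style bit-flip mutation injects diversity so the swarm
    # does not collapse onto a single local minimum
    return [1 - b if random.random() < rate else b for b in bits]

swarm = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(N_PARTICLES)]
pbest = list(swarm)
gbest = min(swarm, key=cost)

for _ in range(N_ITERS):
    for i, x in enumerate(swarm):
        # discrete "velocity" update: each bit is copied from the particle itself,
        # its personal best, or the global best, with equal probability
        new = [random.choice((xb, pb, gb)) for xb, pb, gb in zip(x, pbest[i], gbest)]
        new = mutate(new)
        swarm[i] = new
        if cost(new) < cost(pbest[i]):
            pbest[i] = new
    gbest = min(pbest, key=cost)
```

With the mutation step removed, all particles tend to copy the same bits and the swarm stalls; the mutation operator is what supplies the diversity the abstract refers to.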

Relevance:

10.00%

Publisher:

Abstract:

This paper employs empirical evidence from a survey of Queensland secondary school students to examine their knowledge about their wages and working conditions. It does so through the theoretical lens of the Gagne (or Gagne-Briggs) theory of instruction, which centres on the content of learning and how learning is acquired (Gagne, Briggs & Wager, 1988). While Gagne articulates five categories of learning, our focus here is on two: verbal information or declarative knowledge (facts that people can declare), and procedural knowledge (the rules and procedures for achieving outcomes). We show that student workers know little about the instruments governing their employment or their workplace entitlements. Of the total sample of year 9 and year 11 students surveyed (n=892), those students who worked, or who had worked in the past year (n=438), were asked to identify whether they were employed under an award, a collective agreement or an AWA. Eighty-three per cent of students did not know which industrial instrument set their wages. We argue that if young workers have neither declarative knowledge of their entitlements nor basic procedural knowledge about redress, then they are not in a position to deploy Gagne's 'cognitive strategies' that would enable them to take action to ensure their working conditions meet legal minima. We advocate that young workers be given summary information on their wages and other entitlements on appointment, that such information be readily available on employers' noticeboards and electronically on company websites, and that it include a brief summary of avenues for redressing underpayment or sub-standard conditions.

Relevance:

10.00%

Publisher:

Abstract:

Over recent years, Unmanned Air Vehicles (UAVs) have become a powerful tool for reconnaissance and surveillance tasks. These vehicles are now available in a broad range of sizes and capabilities and are intended to fly in regions where the presence of onboard human pilots is either too risky or unnecessary. This paper describes the formulation and application of a design framework that supports the complex task of multidisciplinary design optimisation of UAV systems via evolutionary computation. The framework includes a Graphical User Interface (GUI), a robust evolutionary algorithm optimiser named HAPEA, several design modules, mesh generators and post-processing capabilities in an integrated platform. Population-based algorithms such as EAs are well suited to problems where the search space can be multi-modal, non-convex or discontinuous, with multiple local minima and with noise, as well as to problems where we seek multiple solutions via game theory, namely a Nash equilibrium point or a Pareto set of non-dominated solutions. The application of the methodology is illustrated on conceptual and detailed multi-criteria and multidisciplinary shape design problems. Results indicate the practicality and robustness of the framework in finding optimal shapes and trade-offs between the disciplinary analyses, and in producing a Pareto front of non-dominated solutions for the designer.
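
Extracting a Pareto set of non-dominated solutions, as the framework does, takes only a few lines. The sketch below is a generic illustration, not part of HAPEA: the candidate designs and the two objectives (both hypothetical, both to be minimized) are invented for the example.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # keep only the non-dominated points
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (drag, structural weight) values for candidate UAV shapes
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(designs)
```

Here (3.0, 4.0) is dominated by (2.0, 3.0) and is excluded; the remaining designs are mutually non-dominated trade-offs between the two objectives.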

Relevance:

10.00%

Publisher:

Abstract:

We present a novel modified theory based upon Rayleigh scattering of ultrasound from composite nanoparticles with a liquid core and solid shell. We derive closed-form solutions for the scattering cross-section and have applied this model to an ultrasound contrast agent consisting of a liquid-filled core (perfluorooctyl bromide, PFOB) encapsulated by a polymer shell (polycaprolactone, PCL). A sensitivity analysis was performed to predict the dependence of the scattering cross-section upon material and dimensional parameters. A rapid increase in the scattering cross-section was achieved by increasing the compressibility of the core, validating the incorporation of highly compressible PFOB; the compressibility of the shell had little impact on the overall scattering cross-section, although a more compressible shell is desirable. Changes in the density of the shell and the core result in predicted local minima in the scattering cross-section, approximately corresponding to the PFOB-PCL contrast agent considered; hence, incorporation of a lower shell density could significantly improve the scattering cross-section. A 50% reduction in shell thickness relative to the external radius increased the predicted scattering cross-section by 50%. Although it has often been considered that the shell has a negative effect on echogenicity due to its low compressibility, we have shown that it can potentially play an important role in the echogenicity of the contrast agent. The challenge for the future is to identify suitable shell and core materials that meet the predicted characteristics in order to achieve optimal echogenicity.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a comprehensive planning methodology is proposed that minimizes line loss, maximizes reliability and improves the voltage profile in a distribution network. The injected active and reactive power of Distributed Generators (DGs) and the installed capacitor sizes at different buses and for different load levels are optimally controlled. The tap setting of the HV/MV transformer, along with line and transformer upgrading, is also included in the objective function. A hybrid optimization method, called Hybrid Discrete Particle Swarm Optimization (HDPSO), is introduced to solve this nonlinear and discrete optimization problem. The proposed HDPSO approach is an extension of DPSO in which the diversity of the optimizing variables is increased using genetic algorithm operators to avoid becoming trapped in local minima. The objective function is composed of the investment cost of DGs, capacitors, distribution lines and the HV/MV transformer, the line loss, and the reliability. All of these elements are converted into dollar costs; given this, a single-objective optimization method is sufficient. The bus voltage and the line current as constraints are satisfied during the optimization procedure. The IEEE 18-bus test system is modified and employed to evaluate the proposed algorithm. The results illustrate the unavoidable need for optimal control of DG active and reactive power and of capacitors in distribution networks.

Relevance:

10.00%

Publisher:

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding for the unlabeled part as well. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
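
A full SDP formulation requires a semidefinite solver, but the underlying idea, preferring a kernel matrix that matches the labels, can be illustrated with the simpler notion of empirical kernel alignment between a kernel matrix K and the ideal label kernel yyᵀ. The sketch below is a simplified stand-in, not the paper's algorithm: the synthetic data, the two candidate kernels and the use of alignment as a selection criterion are all assumptions made for the example.

```python
import numpy as np

def alignment(K, y):
    # empirical alignment between kernel matrix K and the ideal label
    # kernel y y^T: <K, yy^T>_F / (||K||_F * ||yy^T||_F), in [-1, 1]
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)   # linearly separable labelling

K_lin = X @ X.T        # linear kernel: informative for this labelling
K_rand = np.eye(40)    # identity kernel: carries no similarity information

a_lin, a_rand = alignment(K_lin, y), alignment(K_rand, y)
```

The linear kernel aligns far better with the labels than the identity kernel; the SDP approach in the paper optimizes over a whole family of kernel matrices rather than comparing a fixed pair.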

Relevance:

10.00%

Publisher:

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semi-definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.

Relevance:

10.00%

Publisher:

Abstract:

Determination of the placement and rating of transformers and feeders is the main objective of basic distribution network planning. The bus voltage and the feeder current are two constraints which should be maintained within their standard ranges. Distribution network planning becomes harder when the planning area is located far from the sources of power generation and from the existing infrastructure, mainly as a consequence of voltage drop, line loss and reduced system reliability. Supplying loads over long distances causes a significant voltage drop across the distribution lines. Capacitors and Voltage Regulators (VRs) can be installed to decrease this voltage drop. Long distances also increase the probability of a failure occurring, which lowers network reliability. Cross-Connections (CCs) and Distributed Generators (DGs) are devices which can be employed to improve system reliability. Another main factor which should be considered in planning distribution networks (in both rural and urban areas) is load growth. To accommodate this factor, transformers and feeders are conventionally upgraded, which incurs a large cost. Installation of DGs and capacitors in a distribution network can alleviate this issue while providing other benefits. In this research, a comprehensive planning methodology is presented for distribution networks. Since the distribution network is composed of low- and medium-voltage networks, both are included in this procedure; however, the main focus of this research is on medium-voltage network planning. The main objective is to minimize the investment cost, the line loss and the reliability indices over a study timeframe while supporting load growth. The investment cost relates to distribution network elements such as transformers, feeders, capacitors, VRs, CCs and DGs. The voltage drop and the feeder current as constraints are maintained within their standard ranges.
In addition to minimizing the reliability and line loss costs, the planned network should support continual load growth, which is an essential concern in planning distribution networks. In this thesis, a novel segmentation-based strategy is proposed for including this factor. Using this strategy, the computation time is significantly reduced compared with the exhaustive search method while the accuracy remains acceptable. In addition to handling load growth, this strategy is appropriate for the inclusion of practical (dynamic) load characteristics, as demonstrated in this thesis. The allocation and sizing problem has a discrete nature with several local minima, which highlights the importance of selecting a proper optimization method. A modified discrete particle swarm optimization, a heuristic method, is introduced in this research to solve this complex planning problem. Discrete nonlinear programming (an analytical method) and a genetic algorithm (a heuristic method) are also applied to the problem to benchmark the proposed optimization method.

Relevance:

10.00%

Publisher:

Abstract:

Local image feature extractors that select local maxima of the determinant-of-Hessian function have been shown to perform well and are widely used. This paper introduces the negative local minima of the determinant-of-Hessian function for local feature extraction. The properties and scale-space behaviour of these features are examined and found to be desirable for feature extraction. It is shown how this new feature type can be implemented alongside the existing local-maxima approach at negligible extra processing cost. Applications to affine covariant feature extraction and sub-pixel-precise corner extraction are demonstrated. Experimental results indicate that the new corner detector is more robust to image blur and noise than existing methods, and that it is accurate for a broader range of corner geometries. An affine covariant feature extractor is implemented by combining the minima of the determinant of Hessian with existing scale and shape adaptation methods. This extractor can be implemented alongside the existing Hessian-maxima extractor simply by finding both minima and maxima during the initial extraction stage. The minima features increase the number of correspondences two- to four-fold. The additional minima features are very distinct from the maxima features in descriptor space and do not make the matching process more ambiguous.
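
The point that minima come almost for free alongside maxima can be illustrated numerically. The sketch below is a minimal stand-alone illustration, not the paper's extractor (no scale space, no shape adaptation; a finite-difference Hessian on a synthetic image): a Gaussian blob yields a determinant-of-Hessian maximum at its centre and a ring of negative minima around it, and a single scan extracts both feature types.

```python
import numpy as np

# synthetic test image: a bright Gaussian blob on a dark background
x = np.linspace(-3, 3, 61)
X, Y = np.meshgrid(x, x)
img = np.exp(-(X**2 + Y**2))

# Hessian entries via central finite differences
Ix = np.gradient(img, axis=1)
Iy = np.gradient(img, axis=0)
Ixx = np.gradient(Ix, axis=1)
Iyy = np.gradient(Iy, axis=0)
Ixy = np.gradient(Ix, axis=0)
det_h = Ixx * Iyy - Ixy**2

def local_extrema(f, sign):
    # sign=+1: positive local maxima of f; sign=-1: negative local minima of f
    g = sign * f
    pts = []
    for i in range(1, g.shape[0] - 1):
        for j in range(1, g.shape[1] - 1):
            if g[i, j] > 0 and g[i, j] == g[i-1:i+2, j-1:j+2].max():
                pts.append((i, j))
    return pts

maxima = local_extrema(det_h, +1)   # blob-like features (the existing detector)
minima = local_extrema(det_h, -1)   # the new feature type, found in the same pass
```

For this blob, det(H) is positive near the centre (both curvatures negative) and negative on the surrounding saddle-like ring, which is exactly where the minima features appear.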

Relevance:

10.00%

Publisher:

Abstract:

Purpose: The measurement of broadband ultrasonic attenuation (BUA) in cancellous bone for the assessment of osteoporosis follows a parabolic-type dependence on bone volume fraction, with minima corresponding to entirely bone and entirely marrow. Langton has recently proposed that the primary BUA mechanism may be significant phase interference due to variations in propagation transit time through the test sample as detected over the phase-sensitive surface of the receive ultrasound transducer. This fundamentally simple concept assumes that the propagation of ultrasound through a complex solid:liquid composite sample such as cancellous bone may be described by an array of parallel 'sonic rays'. The transit time of each ray is defined by the proportion of bone and marrow propagated through, being a minimum (tmin) solely through bone and a maximum (tmax) solely through marrow. A Transit Time Spectrum (TTS), ranging from tmin to tmax, may be defined, describing the proportion of sonic rays having a particular transit time and effectively describing the lateral inhomogeneity of transit time over the surface of the receive ultrasound transducer. Phase interference may result from the interaction of sonic rays of differing transit times. The aim of this study was to test the hypothesis that phase interference depends upon the lateral inhomogeneity of transit time, by comparing experimental measurements and computer simulation predictions of ultrasound propagation through a range of relatively simple solid:liquid models exhibiting a range of lateral inhomogeneities. Methods: A range of test models was manufactured using acrylic and water as surrogates for bone and marrow respectively.
The models varied in thickness in one dimension normal to the direction of propagation, hence exhibiting a range of transit time lateral inhomogeneities, from minimal (a single transit time) to maximal (a wedge; ultimately the limiting case where each sonic ray has a unique transit time). For the experimental component of the study, two unfocused 1 MHz broadband transducers of 3/4-inch diameter were utilized in transmission mode, and ultrasound signals were recorded for each of the models. The computer simulation was performed in Matlab, where the transit time and relative amplitude of each sonic ray were calculated. The transit time for each sonic ray was defined as the sum of the transit times through the acrylic and water components. The relative amplitude accounted for the reception area of each sonic ray along with absorption in the acrylic. To replicate phase-sensitive detection, all sonic rays were summed and the output signal plotted in comparison with the experimentally derived output signal. Results: Qualitative and quantitative comparison of the experimental and computer simulation results shows a high degree of agreement, between 94.2% and 99.0%, supporting the concept that propagation of an ultrasound wave, for the models considered, may be approximated by a parallel sonic ray model where the transit time of each ray is defined by the proportion of 'bone' and 'marrow'. Conclusions: This combined experimental and computer simulation study has demonstrated that lateral inhomogeneity of transit time creates significant potential for phase interference when a phase-sensitive receive transducer is used, as in most commercial ultrasound bone analysis devices.
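
The phase-sensitive summation at the heart of the sonic-ray model can be sketched numerically. The toy below is not the study's simulation: the number of rays and the transit times are invented, and continuous gated sine waves stand in for the experimental pulses. It sums one sinusoid per sonic ray; with a single shared transit time the rays add coherently, while a wedge-like spread of transit times produces phase interference and a much smaller received amplitude.

```python
import numpy as np

f = 1.0e6                      # 1 MHz, matching the transducers in the study
t = np.arange(0, 20e-6, 1e-9)  # 20 microsecond time base, 1 ns steps

def received(transit_times):
    # phase-sensitive receiver: sum all sonic rays, one per transit time,
    # each ray switched on once it has arrived
    return sum(np.sin(2 * np.pi * f * (t - tt)) * (t >= tt) for tt in transit_times)

# minimal inhomogeneity: every ray has the same transit time (flat model)
flat = received([10e-6] * 50)
# maximal inhomogeneity: transit times spread uniformly, as in the wedge model
wedge = received(np.linspace(8e-6, 12e-6, 50))

peak_flat, peak_wedge = np.abs(flat).max(), np.abs(wedge).max()
```

The coherent sum peaks at the full amplitude of all 50 rays, whereas the wedge's spread of phases largely cancels, which is exactly the phase-interference effect the transit time spectrum is meant to capture.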

Relevance:

10.00%

Publisher:

Abstract:

To enhance the performance of the k-nearest neighbors approach in forecasting short-term traffic volume, this paper proposes and tests a two-step approach capable of forecasting multiple steps ahead. In selecting the k nearest neighbors, a time constraint window is introduced, and local minima of the distances between state vectors are then ranked to avoid overlap among candidates. Moreover, to control the undesirable impact of extreme values, a novel algorithm with attractive analytical features is developed based on the principal component. The enhanced KNN method has been evaluated using field data, and our comparison analysis shows that it outperformed the competing algorithms in most cases.
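
A minimal sketch of time-constrained neighbour selection is given below. It is a simplification of the paper's method: the ranking of local distance minima and the principal-component control of extreme values are omitted, and the synthetic daily traffic pattern, window width and all parameters are invented for the example. Candidate states are simply restricted to a window around the same time of day before the k closest are averaged.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic 5-minute traffic volumes: a daily sinusoidal pattern plus noise
periods_per_day = 288
t = np.arange(periods_per_day * 30)          # 30 days of history
series = 100 + 50 * np.sin(2 * np.pi * t / periods_per_day) + rng.normal(0, 2, t.size)

D = 3          # state-vector dimension (number of lags)
K = 5          # number of neighbours
WINDOW = 12    # time constraint: only candidates within +/- 12 periods of the same time of day

def knn_forecast(series, now):
    state = series[now - D:now]
    cands = []
    for i in range(D, now - 1):
        d = (i - now) % periods_per_day      # time-of-day offset of candidate i
        if d > WINDOW and periods_per_day - d > WINDOW:
            continue                          # outside the time constraint window
        dist = np.linalg.norm(series[i - D:i] - state)
        cands.append((dist, series[i]))       # successor value of the candidate state
    cands.sort(key=lambda c: c[0])
    return np.mean([v for _, v in cands[:K]])  # average the K nearest successors

now = series.size - 1
pred = knn_forecast(series, now)
actual = series[now]
```

Because candidates share the target's time of day, the neighbours already carry the daily pattern, and the forecast error is dominated by the noise term rather than by mismatched traffic regimes.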

Relevance:

10.00%

Publisher:

Abstract:

The K-means algorithm is one of the most popular clustering techniques. Nevertheless, its performance depends heavily on the initial cluster centers, and it may converge to local minima. This paper proposes a hybrid evolutionary-programming-based clustering algorithm, called PSO-SA, that combines particle swarm optimization (PSO) and simulated annealing (SA). The basic idea is to search around the global solution with SA and to increase the information exchange among particles using a mutation operator, allowing the search to escape local optima. Three datasets, Iris, Wisconsin Breast Cancer, and Ripley's Glass, are considered to show the effectiveness of the proposed clustering algorithm in providing optimal clusters. The simulation results show that the PSO-SA clustering algorithm not only produces better solutions but also converges more quickly than the K-means, PSO, and SA algorithms.
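
The sensitivity of K-means to its initial centers, the problem PSO-SA is designed to address, is easy to demonstrate. The sketch below is not PSO-SA itself: it runs plain Lloyd's iterations from many random initialisations on three well-separated synthetic clusters (all data and parameters invented for the example) and records the final within-cluster sum of squares, showing that different starts reach different local minima.

```python
import numpy as np

rng = np.random.default_rng(0)
# three well-separated 2-D clusters of 50 points each
data = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in [(0, 0), (5, 0), (0, 5)]])

def kmeans_inertia(data, k, seed, iters=50):
    # plain Lloyd's algorithm from a random choice of initial centers
    r = np.random.default_rng(seed)
    centers = data[r.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):           # keep old center if cluster empties
                centers[j] = data[labels == j].mean(axis=0)
    # within-cluster sum of squares at convergence
    return ((data - centers[labels]) ** 2).sum()

# different initialisations converge to different local minima
inertias = [kmeans_inertia(data, 3, seed) for seed in range(20)]
best, worst = min(inertias), max(inertias)
```

Runs that happen to seed two centers inside one cluster stay stuck with a much larger inertia; hybrid schemes such as PSO-SA aim to avoid exactly this dependence on the initial centers.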

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a novel optimum path planning strategy for long-duration AUV operations in environments with time-varying ocean currents. These currents can exceed the maximum achievable speed of the AUV, as well as temporally expose obstacles. In contrast to most other path planning strategies, paths have to be defined in time as well as space. The solution described here exploits ocean currents to achieve mission goals with minimal energy expenditure, or with a trade-off between mission time and required energy. The proposed algorithm uses a parallel swarm search to reduce susceptibility to the large local minima of the complex cost surface. The performance of the optimisation algorithms is evaluated both in simulation and experimentally with the Starbug AUV, using a validated ocean model of Brisbane's Moreton Bay.
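
Why a parallel swarm search helps on a cost surface with many local minima can be shown with a one-dimensional toy. Everything below is invented for illustration and has nothing to do with the AUV planner's actual cost function: independent local searches launched from many random starts are far more likely to land in the deep minimum than any single search.

```python
import math
import random

random.seed(2)

def cost(x):
    # toy multimodal cost surface standing in for the planner's complex landscape
    return x * x + 10 * math.sin(3 * x)

def hill_climb(x, step=0.05, iters=500):
    # simple greedy local search: converges to the nearest local minimum
    for _ in range(iters):
        for cand in (x - step, x + step):
            if cost(cand) < cost(x):
                x = cand
    return x

# a "swarm" of independent starts searched in parallel
starts = [random.uniform(-5.0, 5.0) for _ in range(20)]
finishes = [hill_climb(s) for s in starts]
best = min(finishes, key=cost)
```

A single hill climb frequently stalls in one of the shallow basins; taking the best of many parallel climbs makes finding a deep basin overwhelmingly likely, which is the role the swarm plays on the planner's cost surface.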

Relevance:

10.00%

Publisher:

Abstract:

Neutral and cationic [C2,P2] were investigated by a combination of mass spectrometry and electronic structure calculations. The cationic [C2,P2]•+ potential energy surface, including all relevant minima, transition states and fragmentation products, was calculated at the B3LYP/6-311G(3df) level of theory. The most stable structures are linear PCCP•+ 1•+ (E_rel = 0 kcal mol^-1), a three-membered ring with exocyclic phosphorus, c-(PCC)P 2•+ (E_rel = 40.8 kcal mol^-1), and the rhombic isomer 3•+ (E_rel = 24.9 kcal mol^-1). All fragmentation channels lie significantly higher in energy than any of the [C2,P2]•+ isomers. Experimentally, [C2,P2]•+ ions are generated under high-vacuum conditions by electron ionization of two different precursors. The fragmentation of [C2,P2]•+ on collisional activation is preceded by rearrangement reactions which obscure the structural connectivity of the ions. The existence and high stability of neutral [C2,P2] were proved by a neutralization-reionization (NR) experiment. Although an unambiguous structural assignment of the neutral species cannot be made, both theory and experiment suggest that the long-sought neutral, linear PCCP 1 is generated using the NR technique.