814 results for Multi-objective functions


Relevance:

30.00%

Publisher:

Abstract:

The occurrence of rockbursts was quite common during active mining periods in the Champion reef mines of the Kolar gold fields, India. Among the major rockbursts, the ‘area-rockbursts’ were unique both in their spatio-temporal distribution and in the extent of damage they caused to the mine workings. A detailed study of the spatial clustering of three major area-rockbursts (ARB) was carried out using a multi-fractal technique based on generalized correlation integral functions. The spatial distribution analysis showed that all three area-rockbursts are heterogeneous. The degree of heterogeneity (D2 − D∞) for ARB-I, II and III was found to be 0.52, 0.37 and 0.41 respectively. These differences in fractal structure indicate that the ARBs of the present study were fully controlled by different heterogeneous stress fields associated with different mining and geological conditions. The study clearly demonstrates the advantages of applying multi-fractal analysis to seismic data in order to characterise, analyse and examine area-rockbursts and their causative factors in the Kolar gold mines.
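
The heterogeneity measure quoted above comes from a spectrum of generalized fractal dimensions Dq estimated via correlation integrals. The sketch below shows one common way such a spectrum can be estimated from event coordinates; the synthetic event cloud, the radii and the use of q = 10 as a stand-in for q → ∞ are illustrative assumptions, not the procedure used in the study.

```python
# Sketch: estimating generalized fractal dimensions D_q from event locations
# (hypothetical synthetic data; the study used rockburst hypocentres).
import numpy as np

def generalized_correlation_integral(points, r, q):
    """C_q(r) via a Grassberger-Procaccia style pair-counting estimator."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # fraction of neighbours of each point within radius r (self excluded)
    p_i = ((d < r).sum(axis=1) - 1) / (n - 1)
    p_i = np.clip(p_i, 1e-12, None)          # avoid log(0) / 0**negative
    if q == 1:
        return np.exp(np.mean(np.log(p_i)))  # information-dimension limit
    return np.mean(p_i ** (q - 1)) ** (1.0 / (q - 1))

def dimension_spectrum(points, q_values, radii):
    """Estimate D_q as the log-log slope of C_q(r) against r."""
    dims = {}
    for q in q_values:
        c = [generalized_correlation_integral(points, r, q) for r in radii]
        slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
        dims[q] = slope
    return dims

rng = np.random.default_rng(0)
events = rng.random((500, 3))                 # hypothetical event coordinates
radii = np.logspace(-1.3, -0.3, 10)
dq = dimension_spectrum(events, q_values=[2, 10], radii=radii)
heterogeneity = dq[2] - dq[10]                # D2 - D_inf proxy (large q)
print(f"D2={dq[2]:.2f}  D10={dq[10]:.2f}  heterogeneity={heterogeneity:.2f}")
```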

Relevance:

30.00%

Publisher:

Abstract:

The main objective of food safety policy is to protect consumer health through specific safety rules and protocols. In order to meet food safety and quality standardisation requirements, in 2002 the European Parliament and the Council of the EU (Regulation (EC) 178/2002 (EC, 2002)) sought to harmonise concepts, principles and procedures so as to provide a common basis for the regulation of food and feed originating from Member States at the Community level. The formalisation of standardisation rules and protocols should, however, proceed through a more detailed and accurate understanding and harmonisation of the global (macroscopic), pseudo-local (mesoscopic) and, possibly, local (microscopic) properties of food products. The main objective of this doctoral thesis is to illustrate how computational techniques can provide valid support for this analysis, through (i) the application of established protocols and (ii) the improvement of widely applied techniques. A direct demonstration of the potential already offered by computational approaches is given in the first work, in which a docking-based virtual screening was applied to assess the preliminary xeno-androgenic activity of selected food contaminants. The second and third works concern the development and validation of new physico-chemical descriptors in a 3D-QSAR context. Named HyPhar (Hydrophobic Pharmacophore), the new methodology was used to explore selectivity among structurally related molecular targets and thereby demonstrated the applicability and adaptability required in a food context. Overall, the results give confidence in the potential impact that in silico techniques can have on the identification and clarification of molecular events involved in the toxicological and nutritional aspects of food.

Relevance:

30.00%

Publisher:

Abstract:

Several parties (stakeholders) are involved in a construction project. The conventional Risk Management Process (RMP) manages risks from a single party's perspective, which does not give adequate consideration to the needs of the others. The objective of multi-party risk management is to assist decision-makers in managing risk systematically and efficiently in a multi-party environment. The Multi-party Risk Management Process (MRMP) consists of risk identification, structuring, analysis and response development from the perspectives of all parties. The MRMP has been applied to a cement plant construction project in Thailand to demonstrate its effectiveness.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To spatially and temporally characterise the cortical contrast response function to pattern onset stimuli in humans. Methods: Magnetoencephalography (MEG) was used to investigate the human cortical contrast response function to pattern onset stimuli with high temporal and spatial resolution. A beamformer source reconstruction approach was used to spatially localise and identify the time courses of activity at various visual cortical loci. Results: Consistent with the findings of previous studies, MEG beamformer analysis revealed two simultaneous generators of the pattern onset evoked response. These generators arose from anatomically discrete locations in striate and extra-striate visual cortex. Furthermore, these loci demonstrated notably distinct contrast response functions: the striate response increased approximately linearly with contrast, whilst the extra-striate response followed a saturating function. Conclusions: The generators that underlie the pattern onset visual evoked response arise from two distinct regions in striate and extra-striate visual cortex. Significance: The spatially, temporally and functionally distinct mechanisms of contrast processing within the visual cortex may account for the disparate results observed across earlier studies and assist in elucidating causal mechanisms of aberrant contrast processing in neurological disorders. © 2005 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
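
A saturating contrast response is commonly modelled with a hyperbolic-ratio (Naka-Rushton) function. The sketch below fits a linear curve and a Naka-Rushton curve to two synthetic amplitude-versus-contrast series; the choice of functions, the data values and the starting parameters are assumptions for illustration, not the fits reported in the study.

```python
# Sketch: fitting a linear vs a saturating (Naka-Rushton) contrast response
# function to source amplitudes. Data and parameters are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def linear_crf(c, a, b):
    return a * c + b

def naka_rushton(c, r_max, c50, n):
    return r_max * c**n / (c**n + c50**n)

contrast = np.array([0.05, 0.1, 0.2, 0.4, 0.8])          # Michelson contrast
striate = np.array([0.8, 1.7, 3.2, 6.5, 12.9])           # synthetic amplitudes
extrastriate = np.array([2.1, 3.6, 5.0, 5.9, 6.2])

p_lin, _ = curve_fit(linear_crf, contrast, striate)
p_nr, _ = curve_fit(naka_rushton, contrast, extrastriate,
                    p0=[6.0, 0.15, 2.0], maxfev=5000)

print("striate (linear):       slope=%.2f  intercept=%.2f" % tuple(p_lin))
print("extrastriate (N-R):     Rmax=%.2f  C50=%.2f  n=%.2f" % tuple(p_nr))
```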

Relevance:

30.00%

Publisher:

Abstract:

In Information Filtering (IF) a user may be interested in several topics in parallel. But IF systems have been built on representational models derived from Information Retrieval and Text Categorization, which assume independence between terms. The linearity of these models results in user profiles that can only represent one topic of interest. We present a methodology that takes into account term dependencies to construct a single profile representation for multiple topics, in the form of a hierarchical term network. We also introduce a series of non-linear functions for evaluating documents against the profile. Initial experiments produced positive results.
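
One way a single profile can cover several topics while respecting term dependencies is to let a term contribute to a document's score only when its parent in the hierarchy also occurs. The sketch below shows such a non-linear scoring rule over a small hierarchical term network; the network, weights and rule are hypothetical and are not the functions introduced in the paper.

```python
# Sketch: evaluating a document against a hierarchical term-network profile.
# Structure, weights and scoring rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TermNode:
    term: str
    weight: float
    children: list = field(default_factory=list)

def score(node, doc_terms):
    """Non-linear scoring: a subtree only contributes if its root term occurs,
    so dependent (child) terms cannot fire without their parent."""
    if node.term not in doc_terms:
        return 0.0
    return node.weight + sum(score(child, doc_terms) for child in node.children)

# Profile covering two topics of interest in a single hierarchy.
profile = [
    TermNode("retrieval", 1.0, [TermNode("ranking", 0.5), TermNode("index", 0.4)]),
    TermNode("filtering", 1.0, [TermNode("profile", 0.6)]),
]

doc = {"retrieval", "ranking", "profile"}   # "profile" ignored: parent absent
print(sum(score(root, doc) for root in profile))   # -> 1.5
```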

Relevance:

30.00%

Publisher:

Abstract:

This thesis is about the study of relationships between experimental dynamical systems. The basic approach is to fit radial basis function maps between time delay embeddings of manifolds. We have shown that under certain conditions these maps are generically diffeomorphisms, and can be analysed to determine whether or not the manifolds in question are diffeomorphically related to each other. If not, a study of the distribution of errors may provide information about the lack of equivalence between the two. The method has applications wherever two or more sensors are used to measure a single system, or where a single sensor can respond on more than one time scale: their respective time series can be tested to determine whether or not they are coupled, and to what degree. One application which we have explored is the determination of a minimum embedding dimension for dynamical system reconstruction. In this special case the diffeomorphism in question is closely related to the predictor for the time series itself. Linear transformations of delay embedded manifolds can also be shown to have nonlinear inverses under the right conditions, and we have used radial basis functions to approximate these inverse maps in a variety of contexts. This method is particularly useful when the linear transformation corresponds to the delay embedding of a finite impulse response filtered time series. One application of fitting an inverse to this linear map is the detection of periodic orbits in chaotic attractors, using suitably tuned filters. This method has also been used to separate signals with known bandwidths from deterministic noise, by tuning a filter to stop the signal and then recovering the chaos with the nonlinear inverse. The method may have applications to the cancellation of noise generated by mechanical or electrical systems. In the course of this research a sophisticated piece of software has been developed. The program allows the construction of a hierarchy of delay embeddings from scalar and multi-valued time series. The embedded objects can be analysed graphically, and radial basis function maps can be fitted between them asynchronously, in parallel, on a multi-processor machine. In addition to a graphical user interface, the program can be driven by a batch mode command language, incorporating the concept of parallel and sequential instruction groups and enabling complex sequences of experiments to be performed in parallel in a resource-efficient manner.
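
The core operations described here are a time-delay embedding of each scalar series followed by a radial basis function map fitted between the two embedded manifolds. The sketch below illustrates both steps on two synthetic "sensor" signals from one underlying system; the embedding parameters, Gaussian basis widths, centre selection and ridge term are illustrative assumptions rather than the thesis's settings.

```python
# Sketch: time-delay embedding of two time series and a radial basis function
# map from one embedding to the other. Parameters and signals are illustrative.
import numpy as np

def delay_embed(x, dim, tau):
    """Return the delay-embedded trajectory matrix of a scalar series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def fit_rbf(src, dst, centres, width, ridge=1e-8):
    """Least-squares fit of a Gaussian RBF map src -> dst."""
    d = np.linalg.norm(src[:, None, :] - centres[None, :, :], axis=-1)
    phi = np.exp(-(d / width) ** 2)
    w, *_ = np.linalg.lstsq(phi.T @ phi + ridge * np.eye(len(centres)),
                            phi.T @ dst, rcond=None)
    return lambda q: np.exp(
        -(np.linalg.norm(q[:, None, :] - centres[None, :, :], axis=-1) / width) ** 2
    ) @ w

t = np.linspace(0, 40 * np.pi, 4000)
x, y = np.sin(t), np.sin(t + 0.7)            # two "sensors" on one system
ex, ey = delay_embed(x, dim=3, tau=10), delay_embed(y, dim=3, tau=10)

centres = ex[:: len(ex) // 50]               # subsample the embedding as centres
rbf = fit_rbf(ex, ey, centres, width=0.5)
print("mean prediction error:", np.mean(np.linalg.norm(rbf(ex) - ey, axis=1)))
```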

Relevance:

30.00%

Publisher:

Abstract:

This project has been undertaken for Hamworthy Hydraulics Limited. Its objective was to design and develop a controller package for a variable displacement hydraulic pump for use mainly on mobile earth-moving machinery. A survey was undertaken of control options used in practice, and from this a design specification was formulated, the successful implementation of which would give Hamworthy an advantage over its competitors. Two different modes for the controller were envisaged: one used conventional hydro-mechanics, and the other was based upon a microprocessor. To meet short-term customer prototype requirements, the first section of work was the realisation of the hydro-mechanical system. Mathematical models were made to evaluate controller stability and hence aid the design. The final package met the requirements of the specification, and a single version could operate all sizes of variable displacement pump in the Hamworthy range. The choice of controller options and combinations totalled twenty-four. The hydro-mechanical controller was complex, and it was realised that a microprocessor system would allow all options to be implemented with just one design of hardware, thus greatly simplifying production. The final section of this project was to determine whether such a design was feasible. This entailed finding cheap, reliable transducers, using mathematical models to predict electro-hydraulic interface stability, testing such interfaces and finally incorporating a microprocessor in an interactive control loop. The study revealed that such a system was technically possible, but it would cost 60% more than its hydro-mechanical counterpart. It was therefore concluded that, in the short term, for the markets considered, the hydro-mechanical design was the better solution. Regarding the microprocessor system, the final conclusion was that, because the relative costs of the two systems are decreasing, the electro-hydraulic controller will gradually become more attractive, and Hamworthy should therefore continue with its development.

Relevance:

30.00%

Publisher:

Abstract:

The number of remote sensing platforms and sensors rises almost every year, yet much work on the interpretation of land cover is still carried out using either single images or images from the same source taken at different dates. Two questions can be asked of this proliferation of images: can the information contained in different scenes be used to improve classification accuracy, and what is the best way to combine the different imagery? Two of these multiple image sources are MODIS on the Terra platform and ETM+ on board Landsat 7, which are suitably complementary. Daily MODIS images with 36 spectral bands at 250-1000 m spatial resolution are available, as are the seven spectral bands of ETM+ at 30 m spatial resolution with a 16-day revisit period. In the UK, cloud cover may mean that only a few ETM+ scenes are available for any particular year, and these may not be at the time of year of most interest. The MODIS data may provide information on land cover over the growing season, such as harvest dates, that is not present in the ETM+ data. Therefore, the primary objective of this work is to develop a methodology for the integration of medium spatial resolution Landsat ETM+ imagery with multi-temporal, multi-spectral, low-resolution MODIS/Terra images, with the aim of improving the classification of agricultural land. Additionally, other data may also be incorporated, such as field boundaries from existing maps. When classifying agricultural land cover of the type seen in the UK, where crops are largely sown in homogeneous fields with clear and often mapped boundaries, the classification is greatly improved by using the mapped polygons and taking the classification of the polygon as a whole as an a priori probability when classifying each individual pixel with a Bayesian approach. When dealing with multiple images from different platforms and dates it is highly unlikely that the pixels will be exactly co-registered, and such pixels will contain a mixture of different real-world land covers. Similarly, the different atmospheric conditions prevailing on different days mean that the same emission from the ground will give rise to different readings at the sensor. Therefore, a method is presented that models the instantaneous field of view and atmospheric effects so that different remotely sensed data sources can be integrated.
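
The polygon-as-prior idea can be written as a simple Bayesian combination: the whole-field classification supplies a prior over crop classes, which is multiplied by the per-pixel likelihoods and renormalised. The sketch below illustrates this; the class set, prior and likelihood values are hypothetical, not the study's classifier outputs.

```python
# Sketch: using a field-polygon classification as a prior for per-pixel
# classification (Bayesian combination). Values are hypothetical; the study
# derived them from ETM+ and MODIS data.
import numpy as np

classes = ["wheat", "barley", "grass"]

# Prior from classifying the polygon as a whole (e.g. from its MODIS profile).
polygon_prior = np.array([0.7, 0.2, 0.1])

# Per-pixel likelihoods p(spectrum | class) from the ETM+ classifier.
pixel_likelihoods = np.array([
    [0.30, 0.45, 0.25],     # an ambiguous pixel
    [0.10, 0.15, 0.75],     # a pixel that looks like grass
])

posterior = pixel_likelihoods * polygon_prior           # Bayes numerator
posterior /= posterior.sum(axis=1, keepdims=True)       # normalise per pixel

for post in posterior:
    print(classes[np.argmax(post)], np.round(post, 3))
```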

Relevance:

30.00%

Publisher:

Abstract:

The starting point of this research was the belief that manufacturing and similar industries need help with the concept of e-business, especially in assessing the relevance of possible e-business initiatives. The research hypothesis was that it should be possible to produce a systematic model that defines, at a useful level of detail and based on objective criteria, the probable e-business requirements of an organisation with an accuracy of 85%-90%. This thesis describes the development and validation of such a model. A preliminary model was developed from a variety of sources, including a survey of current and planned e-business activity and representative examples of e-business material produced by e-business solution providers. The model was subjected to a process of testing and refinement based on recursive case studies, with controls over the improving accuracy and stability of the model. Useful conclusions were also possible as to the relevance of e-business functions to the case study participants themselves. Techniques were evolved to synthesise the e-business requirements of an organisation and present them at a management-summary level of detail. The results of applying these techniques to all the case studies used in this research are discussed. The conclusion of the research was that the case study methodology employed was successful, and a model was achieved that is suitable for practical application in a manufacturing organisation requiring help with a requirements definition process.

Relevance:

30.00%

Publisher:

Abstract:

Multi-agent systems are complex systems composed of multiple intelligent agents that act either independently or in cooperation with one another. Agent-based modelling is a method for studying complex systems such as economies, societies and ecologies. Due to their complexity, mathematical analysis is very often limited in its ability to analyse such systems; in such cases, agent-based modelling offers a practical, constructive method of analysis. The objective of this book is to shed light on some emergent properties of multi-agent systems. The authors focus their investigation on the effect of knowledge exchange on the convergence of complex multi-agent systems.
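
As a flavour of the kind of question studied, the sketch below implements a minimal agent-based model in which agents hold a scalar "knowledge" value and occasionally average it with a randomly chosen partner; the spread of values shrinks faster as the exchange probability rises. The update rule and parameters are purely illustrative and are not the models analysed in the book.

```python
# Sketch: a minimal agent-based model of knowledge exchange. Agents hold a
# scalar belief and, with some probability per step, average it with a random
# partner; the remaining spread is used as a crude convergence measure.
import random

def run(n_agents=100, steps=2000, exchange_prob=0.5, seed=1):
    rng = random.Random(seed)
    beliefs = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        if rng.random() < exchange_prob:
            i, j = rng.sample(range(n_agents), 2)
            mean = (beliefs[i] + beliefs[j]) / 2      # knowledge exchange
            beliefs[i] = beliefs[j] = mean
    return max(beliefs) - min(beliefs)                # spread after the run

for p in (0.1, 0.5, 1.0):
    print(f"exchange_prob={p}: final spread={run(exchange_prob=p):.4f}")
```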

Relevance:

30.00%

Publisher:

Abstract:

Lutein and zeaxanthin are lipid-soluble antioxidants found within the macula region of the retina. Links have been suggested between increased levels of these carotenoids and reduced risk for age-related macular disease (ARMD). Therefore, the effect of lutein-based supplementation on retinal and visual function in people with early stages of ARMD (age-related maculopathy, ARM) was assessed using multi-focal electroretinography (mfERG), contrast sensitivity and distance visual acuity. A total of fourteen participants were randomly allocated to either receive a lutein-based oral supplement (treated group) or no supplement (non-treated group). There were eight participants aged between 56 and 81 years (65·50 (sd 9·27) years) in the treated group and six participants aged between 61 and 83 years (69·67 (sd 7·52) years) in the non-treated group. Sample sizes provided 80 % power at the 5 % significance level. Participants attended for three visits (0, 20 and 40 weeks). At 60 weeks, the treated group attended a fourth visit following 20 weeks of supplement withdrawal. No changes were seen between the treated and non-treated groups during supplementation. Although not clinically significant, mfERG ring 3 N2 latency (P= 0·041) and ring 4 P1 latency (P= 0·016) increased, and a trend for reduction of mfERG amplitudes was observed in rings 1, 3 and 4 on supplement withdrawal. The statistically significant increase in mfERG latencies and the trend for reduced mfERG amplitudes on withdrawal are encouraging and may suggest a potentially beneficial effect of lutein-based supplementation in ARM-affected eyes. Copyright © 2012 The Authors.

Relevance:

30.00%

Publisher:

Abstract:

Visual field assessment is a core component of glaucoma diagnosis and monitoring, and the Standard Automated Perimetry (SAP) test is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment with many pitfalls, it remains in constant use for the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a more recently introduced method for assessing the visual field objectively. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some succeeded in detecting field defects comparable to those found with standard SAP visual field assessment, while others were less informative and required further adjustment and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when it is used for objective assessment of the visual field in glaucoma patients, compared with the gold standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects had two standard Humphrey visual field (HFA) 24-2 tests and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was performed using the new analysis protocol, the Hemifield Sector Analysis (HSA) protocol; analysis of the HFA was performed using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference in mean signal-to-noise ratio (SNR) between the three groups (ANOVA, p<0.001 with a 95% CI). The differences between superior and inferior hemifields were statistically significant in all 11/11 sectors in the glaucoma patient group (t-test, p<0.001), in 5/11 sectors in the glaucoma suspect group (t-test, p<0.01), and in only 1/11 sectors in the normal group. The sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, and for glaucoma suspects 89% and 79%. DISCUSSION: The results showed that the new analysis protocol was able to confirm existing field defects detected by standard HFA and to differentiate between the three study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. The protocol provides information about focal visual field differences across the horizontal midline, which can be used to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test were very promising and correlated with other anatomical changes associated with glaucomatous field loss.
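
The hemifield comparison at the heart of the HSA protocol can be pictured as a per-sector paired test between superior and inferior SNR values across subjects. The sketch below runs such a comparison on synthetic SNR data; the group size, sector count and the simulated superior-inferior difference are assumptions for illustration only, not the recorded mfVEP data.

```python
# Sketch: a Hemifield Sector Analysis style comparison. For each of 11 mirrored
# sectors, the superior-hemifield SNR is compared with its inferior counterpart
# across subjects using a paired t-test. SNR values here are synthetic.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(42)
n_subjects, n_sectors = 36, 11

superior = rng.normal(loc=2.0, scale=0.4, size=(n_subjects, n_sectors))
inferior = superior - rng.normal(loc=0.5, scale=0.2, size=(n_subjects, n_sectors))

for sector in range(n_sectors):
    t, p = ttest_rel(superior[:, sector], inferior[:, sector])
    flag = "significant" if p < 0.001 else "n.s."
    print(f"sector {sector + 1:2d}: t={t:5.2f}  p={p:.2e}  {flag}")
```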

Relevance:

30.00%

Publisher:

Abstract:

In the face of global population growth and the uneven distribution of water supply, a better knowledge of the spatial and temporal distribution of surface water resources is critical. Remote sensing provides a synoptic view of ongoing processes, which addresses the intricate nature of water surfaces and allows an assessment of the pressures placed on aquatic ecosystems. However, the main challenge in identifying water surfaces from remotely sensed data is the high variability of spectral signatures, both in space and time. In the last 10 years only a few operational methods have been proposed to map or monitor surface water at continental or global scale, and each of them shows limitations. The objective of this study is to develop and demonstrate the adequacy of a generic multi-temporal and multi-spectral image analysis method for detecting water surfaces automatically and monitoring them in near-real time. The proposed approach, based on a transformation of the RGB color space into HSV, provides dynamic information at the continental scale. The validation of the algorithm showed very few omission errors and no commission errors, demonstrating its ability to perform as effectively as human interpretation of the images. The validation of the permanent water surface product against an independent dataset derived from high-resolution imagery showed an accuracy of 91.5% and few commission errors. Potential applications of the proposed method have been identified and discussed. The methodology that has been developed is generic: it can be applied to sensors with similar bands with good reliability and minimal effort. Moreover, this experiment at continental scale showed that the methodology is effective for a large range of environmental conditions. Additional preliminary tests over other continents indicate that the proposed methodology could also be applied at the global scale without too many difficulties.
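
The RGB-to-HSV transformation mentioned above lends itself to simple decision rules on hue and value. The sketch below shows one such rule applied to a tiny synthetic RGB composite; the band-to-RGB mapping, hue range and brightness threshold are hypothetical and are not the operational thresholds developed in the study.

```python
# Sketch: transforming an RGB composite into HSV and thresholding for water.
# Band choice and threshold values are hypothetical illustrations only.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def water_mask(rgb, hue_range=(0.5, 0.7), max_value=0.4):
    """Flag pixels whose hue falls in a 'blue' band and which are dark."""
    hsv = rgb_to_hsv(np.clip(rgb, 0.0, 1.0))
    hue, value = hsv[..., 0], hsv[..., 2]
    return (hue >= hue_range[0]) & (hue <= hue_range[1]) & (value <= max_value)

# Tiny synthetic 2x2 "scene": one dark blue (water-like) pixel, three others.
scene = np.array([[[0.05, 0.10, 0.30], [0.40, 0.35, 0.20]],
                  [[0.25, 0.30, 0.15], [0.60, 0.60, 0.55]]])
print(water_mask(scene))
```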

Relevance:

30.00%

Publisher:

Abstract:

Servitization is the process by which manufacturers add services to their product offerings and even replace products with services. The capabilities necessary to develop and deliver advanced services as part of servitization are often discussed in the literature from the manufacturer's perspective, e.g., having a service-focused culture or the ability to sell solutions. Recent research has acknowledged the important role of customers and, to a lesser extent, other actors (e.g., intermediaries) in bringing about successful servitization, particularly for use-oriented and results-oriented advanced services. The objective of this study is to identify the capabilities required to successfully develop advanced services as part of servitization by considering the perspectives of manufacturers, intermediaries and customers. The study involved interviews about servitization capabilities with 33 managers in 28 large UK-based companies from these three groups. The findings suggest that there are eight broad capabilities that are important for advanced services: 1) personnel with expertise and deep technical product knowledge; 2) methodologies for improving operational processes, helping to manage risk and reduce costs; 3) the evolution from being a product-focused manufacturer to embracing a services culture; 4) developing trusting relationships with other actors in the network to support the delivery of advanced services; 5) new innovation activities focused on financing contracts (e.g., ‘gain share’) and technology implementation (e.g., Web-based applications); 6) customer intimacy through understanding customers' business challenges in order to develop suitable solutions; 7) extensive infrastructure (e.g., personnel, service centres) to deliver a local service; and 8) the ability to tailor service offerings to each customer's requirements and deliver these responsively to changing needs. The capabilities required to develop and deliver advanced services align with the need to enhance the operational performance of supplied products throughout their lifecycles, and as such they require greater investment than the capabilities for base and intermediate services.

Relevance:

30.00%

Publisher:

Abstract:

Link quality-based rate adaptation has been widely used for IEEE 802.11 networks. However, network performance is affected by both link quality and random channel access, and selecting transmit modes for optimal link throughput can cause medium access control (MAC) throughput loss. In this paper, we investigate this issue and propose a generalised cross-layer rate adaptation algorithm. It jointly considers link quality and channel access to optimise network throughput; the objective is to examine the potential benefits of cross-layer design. An efficient analytic model is proposed to evaluate rate adaptation algorithms under dynamic channel and multi-user access environments. The proposed algorithm is compared with a link-throughput-optimisation-based algorithm. It is found that rate adaptation which optimises link-layer throughput alone can result in a large performance loss, which cannot be compensated for by optimising the MAC access mechanism alone. Results show that the cross-layer design achieves consistent and considerable performance gains of up to 20%, and it therefore deserves to be exploited in practical designs for IEEE 802.11 networks.
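
To make the idea concrete, the sketch below selects the transmit mode that maximises an estimated MAC-level throughput combining a packet error rate (driven by link quality) with a fixed per-packet channel-access overhead, instead of picking the mode with the highest link throughput. The PER model, overhead value and mode table are simplifying assumptions and are not the analytic model developed in the paper.

```python
# Sketch: 802.11 mode selection accounting for both link quality and channel
# access overhead rather than link throughput alone. All numbers illustrative.
import math

MODES = [(6e6, 2.0), (12e6, 4.0), (24e6, 8.0), (54e6, 17.0)]  # (PHY rate bit/s, required SNR dB)
PAYLOAD_BITS = 8 * 1500

def per(snr_db, snr_req_db):
    """Illustrative packet error rate: decays exponentially with the SNR
    margin above the mode's requirement."""
    return min(1.0, math.exp(-0.6 * (snr_db - snr_req_db)))

def mac_throughput(rate, snr_req_db, snr_db, access_overhead_s=300e-6):
    """Expected goodput once a fixed contention/backoff overhead is added
    to each packet's air time."""
    tx_time = PAYLOAD_BITS / rate + access_overhead_s
    return (1.0 - per(snr_db, snr_req_db)) * PAYLOAD_BITS / tx_time

def pick_mode(snr_db):
    return max(MODES, key=lambda m: mac_throughput(m[0], m[1], snr_db))

for snr in (5, 10, 20):
    rate, _ = pick_mode(snr)
    print(f"SNR {snr:2d} dB -> {rate / 1e6:.0f} Mbit/s")
```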