938 results for multi-classification constrained-covariance regres


Relevance: 100.00%

Abstract:

Eddy covariance measurements of the turbulent sensible heat, latent heat and carbon dioxide fluxes for 12 months (2011–2012) are reported for the first time for a suburban area in the UK. The results from Swindon are comparable to suburban studies of similar surface cover elsewhere but reveal large seasonal variability. Energy partitioning favours turbulent sensible heat during summer (midday Bowen ratio 1.4–1.6) and latent heat in winter (0.05–0.7). A significant proportion of energy is stored (and released) by the urban fabric and the estimated anthropogenic heat flux is small but non-negligible (0.5–0.9 MJ m⁻² day⁻¹). The sensible heat flux is negative at night and for much of winter daytimes, reflecting the suburban nature of the site (44% vegetation) and relatively low built fraction (16%). Latent heat fluxes appear to be water limited during a dry spring in both 2011 and 2012, when the response of the surface to moisture availability can be seen on a daily timescale. Energy and other factors are more relevant controls at other times; at night the wind speed is important. On average, surface conductance follows a smooth, asymmetrical diurnal course peaking at around 6–9 mm s⁻¹, but values are larger and highly variable in wet conditions. The combination of natural (vegetative) and anthropogenic (emission) processes is most evident in the temporal variation of the carbon flux: significant photosynthetic uptake is seen during summer, whilst traffic and building emissions explain peak release in winter (9.5 g C m⁻² day⁻¹). The area is a net source of CO2 annually. Analysis by wind direction highlights the role of urban vegetation in promoting evapotranspiration and offsetting CO2 emissions, especially when contrasted against peak traffic emissions from sectors with more roads. Given the extent of suburban land use, these results have important implications for understanding urban energy, water and carbon dynamics.
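The energy-partitioning statistic quoted above can be sketched in a few lines. The Bowen ratio is simply the ratio of the sensible heat flux H to the latent heat flux LE; the flux values below are illustrative, not data from the Swindon study.

```python
# Bowen ratio: ratio of turbulent sensible heat flux (H) to latent heat
# flux (LE). Values above 1 mean energy partitioning favours sensible heat.
# The flux values used here are made up for illustration.

def bowen_ratio(h_flux_wm2, le_flux_wm2):
    """Return the Bowen ratio H / LE (both fluxes in W m^-2)."""
    if le_flux_wm2 == 0:
        raise ValueError("latent heat flux must be non-zero")
    return h_flux_wm2 / le_flux_wm2

# A hypothetical summer midday with H = 300 W/m^2 and LE = 200 W/m^2:
summer = bowen_ratio(300.0, 200.0)   # 1.5, within the reported 1.4-1.6
# A hypothetical winter midday where latent heat dominates:
winter = bowen_ratio(20.0, 100.0)    # 0.2, within the reported 0.05-0.7
print(summer, winter)
```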

Relevance: 100.00%

Abstract:

Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason for this is that the function of a protein is intrinsically related to its spatial conformation. However, such conformations are very difficult to obtain experimentally in the laboratory. Thus, this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machine and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these techniques (individual classifiers), homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, artificial techniques for class balancing (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to minimize this problem. In order to evaluate the ML methods, a cross-validation procedure is applied, in which the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared pairwise by a hypothesis test to assess whether there is a statistically significant difference between them.
With respect to the results obtained with the individual classifiers, Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Tree and StackingC with Linear Regression as meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class balancing techniques, on the other hand, did not produce a significant improvement in the global classification error. Nevertheless, their use did improve the classification error for the minority class; in this context, the NCL technique proved the most appropriate.
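The pairwise comparison step described above (comparing mean error rates over folds with a hypothesis test) can be sketched with a paired t statistic. This is a minimal sketch under the assumption that a paired t-test is used; the per-fold error rates below are made up for illustration.

```python
import math

# Pairwise comparison of two classifiers' cross-validation error rates
# with a paired t statistic. The fold error rates are illustrative only.

def paired_t_statistic(errors_a, errors_b):
    """Return the paired t statistic for two matched samples of error rates."""
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

svm_errors = [0.10, 0.12, 0.09, 0.11, 0.10]   # hypothetical SVM fold errors
knn_errors = [0.15, 0.14, 0.16, 0.13, 0.15]   # hypothetical k-NN fold errors
t = paired_t_statistic(svm_errors, knn_errors)
# |t| is then compared against the t distribution with n - 1 = 4 degrees
# of freedom to decide whether the difference is statistically significant.
print(round(t, 2))
```

A negative t here means the first classifier's mean error is lower, matching the paper's finding that the SVM was the most accurate individual classifier.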

Relevance: 100.00%

Abstract:

The design and analysis of conceptually different cooling systems for human heart preservation are numerically investigated. A heart cooling container with the required connections was designed for a normal-size human heart. A three-dimensional, high-resolution human heart geometric model obtained from CT-angio data was used for the simulations. Nine different cooling designs are introduced in this research. The first design (Case 1) used a cooling gelatin only outside the heart. In the second (Case 2), the internal parts of the heart were cooled by pumping a cooling liquid through both the heart's pulmonary and systemic circulation systems. An unsteady conjugate heat transfer analysis is performed to simulate the temperature field variations within the heart during the cooling process. Case 3 simulated the currently used cooling method, in which the coolant is stagnant. Case 4 was a combination of Cases 1 and 2. A linear thermoelasticity analysis was performed to assess the stresses applied to the heart during the cooling process. In Cases 5 through 9, the coolant solution was used for both internal and external cooling. For external circulation in Cases 5 and 6, two inlets and two outlets were designed on the walls of the cooling container. Case 5 used laminar flows for the coolant circulations inside and outside the heart. The effects of turbulent flow on cooling of the heart were studied in Case 6. In Case 7, an additional inlet was designed on the cooling container wall to create a jet impinging on the hot region of the heart's wall. Unsteady periodic inlet velocities were applied in Cases 8 and 9. The average temperature of the heart in Case 5 was +5.0 °C after 1500 s of cooling. A multi-objective constrained optimization was performed for Case 5, with the inlet velocities of the two internal and one external coolant circulations as the three design variables. Minimizing the average temperature of the heart, the wall shear stress and the total volumetric flow rate were the three objectives. The only constraint was to keep the von Mises stress below the ultimate tensile stress of the heart's tissue.
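The selection logic of a constrained multi-objective optimization like the one described above can be sketched as a feasibility filter (the von Mises constraint) followed by Pareto filtering over the three minimized objectives. All numbers below are illustrative, not results from this work.

```python
# Constrained multi-objective selection step: discard designs whose
# von Mises stress exceeds the tissue's ultimate tensile stress, then
# keep only non-dominated designs. Objective/stress values are made up.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs, stress_limit):
    feasible = [d for d in designs if d["von_mises"] <= stress_limit]
    return [d for d in feasible
            if not any(dominates(o["objectives"], d["objectives"])
                       for o in feasible if o is not d)]

# objectives = (avg. temperature, wall shear stress, volumetric flow rate),
# all to be minimized; stress values in arbitrary units.
designs = [
    {"objectives": (6.1, 0.8, 2.0), "von_mises": 0.9},  # feasible
    {"objectives": (5.0, 1.0, 2.5), "von_mises": 0.7},  # feasible
    {"objectives": (6.5, 0.9, 2.6), "von_mises": 0.8},  # dominated by the 1st
    {"objectives": (4.0, 0.5, 1.5), "von_mises": 1.5},  # violates constraint
]
front = pareto_front(designs, stress_limit=1.0)
print(len(front))
```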

Relevance: 100.00%

Abstract:

The non-invasive quantification of the brain's microstructural characteristics using diffusion MRI (dMRI) has become an increasingly interesting and complex field over the last two decades. dMRI is currently the only technique that allows the diffusive properties of water to be probed in vivo; from these, information can be inferred at the mesoscopic scale, the scale at which the first alterations of neurodegenerative diseases manifest, and from this level of detail it is potentially possible to develop biomarkers specific to the early stages of neurodegenerative diseases. Hardware advances in clinical scanners have enabled the development of advanced dMRI models based on multi-shell acquisitions, which overcome the limitations of Diffusion Tensor Imaging. In particular, these models allow a better tractographic reconstruction of white-matter bundles, thanks to an accurate estimate of the Orientation Distribution Function, as well as the quantitative estimation of parameters that have led to a better understanding of white-matter microstructure and its possible deviations from the norm. The identification of biomarkers sensitive to the earliest microstructural alterations of neurodegenerative diseases is one of the main goals of these models, since such biomarkers would allow early diagnosis and consequently timely therapeutic treatment before significant cell loss. The text is divided into a first part describing the basic physical notions of dMRI, diffusion tensor imaging and its limitations, and a second part analysing three advanced dMRI models: Diffusion Kurtosis Imaging, Neurite Orientation Dispersion and Density Imaging, and Multi Shell Multi Tissue Constrained Spherical Deconvolution. The aim of the text is to offer an overview of the potential of these models.

Relevance: 40.00%

Abstract:

Urban regeneration is increasingly a "universal issue" and a crucial factor in the new trends of urban planning. It is no longer only an area of study and research; it has become part of new urban and housing policies. Urban regeneration involves complex decisions as a consequence of the multiple dimensions of the problems involved, which include special technical requirements, safety concerns, and socio-economic, environmental, aesthetic and political impacts, among others. This multi-dimensional nature of urban regeneration projects and their large capital investments justify the development and use of state-of-the-art decision support methodologies to assist decision makers. This research focuses on the development of a multi-attribute approach for the evaluation of building conservation status in urban regeneration projects, thus supporting decision makers in their analysis of the problem and in the definition of strategies and priorities of intervention. The methods presented can be embedded into a Geographical Information System for visualization of the results. A real-world case study was used to test the methodology, and its results are also presented.
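A multi-attribute evaluation of the kind described above can be sketched as a weighted additive score per building. The criteria, weights and scores below are hypothetical placeholders, not those of the paper's model.

```python
# Weighted additive multi-attribute score for building conservation
# status. Criteria names, weights and attribute values are illustrative.

def conservation_score(attributes, weights):
    """Weighted sum of attribute values: values normalized to [0, 1],
    weights summing to 1. Higher score = higher intervention priority
    (assuming scales are oriented so that 1 means worst condition)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * attributes[k] for k in weights)

weights    = {"structure": 0.4, "facade": 0.3, "roof": 0.2, "installations": 0.1}
building_a = {"structure": 0.9, "facade": 0.6, "roof": 0.8, "installations": 0.5}
building_b = {"structure": 0.4, "facade": 0.7, "roof": 0.5, "installations": 0.9}

# Rank buildings by descending score to obtain intervention priorities.
ranked = sorted(["A", "B"], key=lambda b: -conservation_score(
    building_a if b == "A" else building_b, weights))
print(ranked)
```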

Relevance: 40.00%

Abstract:

A construction project is a group of discernible tasks or activities that are conducted in a coordinated effort to accomplish one or more objectives. Construction projects require varying levels of cost, time and other resources. To plan and schedule a construction project, its activities must be defined sufficiently; the level of detail determines the number of activities contained within the project plan and schedule. Finding feasible schedules that use scarce resources efficiently is therefore a challenging task within project management. In this context, the well-known Resource Constrained Project Scheduling Problem (RCPSP) has been studied over the last decades. In the RCPSP, the activities of a project have to be scheduled such that the makespan of the project is minimized, observing both the technological precedence constraints and the limited availability of the renewable resources required to accomplish the activities. Once started, an activity may not be interrupted. This problem has been extended to a more realistic model, the multi-mode resource-constrained project scheduling problem (MRCPSP), in which each activity can be performed in one out of several modes. Each mode of an activity represents an alternative way of combining different levels of resource requirements with a related duration. Each renewable resource, such as manpower or machines, has a limited availability for the entire project. This paper presents a hybrid genetic algorithm for the multi-mode resource-constrained project scheduling problem, in which multiple execution modes are available for each of the activities of the project. The objective function is the minimization of the construction project completion time. To solve the problem, a two-level genetic algorithm is applied, which makes use of two separate levels and extends the parameterized schedule generation scheme.
The quality of the resulting schedules is evaluated, and detailed comparative computational results for the MRCPSP are presented, revealing that this approach is a competitive algorithm.
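The decoding step at the heart of such algorithms can be illustrated with a serial schedule generation scheme (SGS) for a tiny single-mode, single-resource instance. This is only the core scheduling step, not the paper's two-level multi-mode GA; the 4-activity project is made up, and the priority order is assumed precedence-feasible (topological).

```python
# Serial schedule generation scheme for a toy single-mode RCPSP instance:
# activities are scheduled in priority order at the earliest start that
# respects precedence and the renewable-resource capacity.

def serial_sgs(durations, demands, preds, capacity, priority):
    horizon = sum(durations.values())
    usage = [0] * (horizon + 1)          # resource usage per time unit
    start = {}
    for act in sorted(durations, key=lambda a: priority[a]):
        # earliest precedence-feasible start
        est = max((start[p] + durations[p] for p in preds[act]), default=0)
        t = est
        while any(usage[u] + demands[act] > capacity
                  for u in range(t, t + durations[act])):
            t += 1                        # shift right until resources fit
        start[act] = t
        for u in range(t, t + durations[act]):
            usage[u] += demands[act]
    return start

durations = {1: 3, 2: 2, 3: 2, 4: 1}
demands   = {1: 2, 2: 2, 3: 1, 4: 2}
preds     = {1: [], 2: [], 3: [1], 4: [2, 3]}
start = serial_sgs(durations, demands, preds, capacity=3,
                   priority={1: 0, 2: 1, 3: 2, 4: 3})
makespan = max(start[a] + durations[a] for a in start)
print(start, makespan)
```

In the parameterized variant used by the paper, the GA additionally evolves delay times that widen or narrow the set of start times the decoder may choose from.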

Relevance: 40.00%

Abstract:

This paper presents a genetic algorithm for the resource constrained multi-project scheduling problem. The chromosome representation of the problem is based on random keys. The schedules are constructed using a heuristic that builds parameterized active schedules based on priorities, delay times, and release dates defined by the genetic algorithm. The approach is tested on a set of randomly generated problems. The computational results validate the effectiveness of the proposed algorithm.
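The random-key representation mentioned above can be sketched briefly: a chromosome is a vector of reals in [0, 1), one per activity, and decoding sorts activities by key to obtain the priority list that the schedule builder consumes. The 5-activity example is illustrative.

```python
import random

# Random-key decoding: any real-valued chromosome maps to a valid activity
# permutation, which is why random keys suit genetic algorithms (standard
# crossover and mutation never produce an infeasible ordering).

def decode_priorities(keys):
    """Return activity indices ordered by ascending random key."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

random.seed(42)
chromosome = [random.random() for _ in range(5)]   # one key per activity
priority_list = decode_priorities(chromosome)
print(priority_list)
```

The delay times and release dates mentioned in the abstract would be further genes appended to this chromosome and fed to the parameterized schedule builder.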

Relevance: 40.00%

Abstract:

This paper presents a genetic algorithm for the multi-mode resource-constrained project scheduling problem (MRCPSP), in which multiple execution modes are available for each of the activities of the project. The objective function is the minimization of the construction project completion time. To solve the problem, a two-level genetic algorithm is applied, which makes use of two separate levels and extends the parameterized schedule generation scheme by introducing an improvement procedure. The quality of the resulting schedules is evaluated, and detailed comparative computational results for the MRCPSP are presented, revealing that this approach is a competitive algorithm.

Relevance: 40.00%

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies

Relevance: 40.00%

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 40.00%

Abstract:

Given the limitations of different types of remote sensing images, automated land-cover classifications of the Amazon várzea may yield poor accuracy indexes. One way to improve accuracy is through the combination of images from different sensors, by either image fusion or multi-sensor classifications. Therefore, the objective of this study was to determine which classification method is more efficient in improving land cover classification accuracies for the Amazon várzea and similar wetland environments - (a) synthetically fused optical and SAR images or (b) multi-sensor classification of paired SAR and optical images. Land cover classifications based on images from a single sensor (Landsat TM or Radarsat-2) are compared with multi-sensor and image fusion classifications. Object-based image analyses (OBIA) and the J.48 data-mining algorithm were used for automated classification, and classification accuracies were assessed using the kappa index of agreement and the recently proposed allocation and quantity disagreement measures. Overall, optical-based classifications had better accuracy than SAR-based classifications. Once both datasets were combined using the multi-sensor approach, there was a 2% decrease in allocation disagreement, as the method was able to overcome part of the limitations present in both images. Accuracy decreased when image fusion methods were used, however. We therefore concluded that the multi-sensor classification method is more appropriate for classifying land cover in the Amazon várzea.
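The accuracy assessment described above can be sketched from a confusion matrix: the kappa index of agreement plus the quantity and allocation disagreement measures. This is a minimal sketch of those standard formulas; the 2-class matrix below is made up for illustration, not data from the study.

```python
# Kappa and the quantity/allocation disagreement measures computed from a
# confusion matrix expressed in proportions (entries sum to 1).
# matrix[i][j]: proportion with reference class i mapped to class j.

def accuracy_measures(matrix):
    n = len(matrix)
    diag = sum(matrix[i][i] for i in range(n))            # overall agreement
    row = [sum(matrix[i]) for i in range(n)]              # reference marginals
    col = [sum(matrix[i][j] for i in range(n)) for j in range(n)]  # map marginals
    expected = sum(row[i] * col[i] for i in range(n))     # chance agreement
    kappa = (diag - expected) / (1 - expected)
    quantity = sum(abs(row[i] - col[i]) for i in range(n)) / 2
    allocation = (1 - diag) - quantity   # disagreement not due to quantity
    return kappa, quantity, allocation

m = [[0.50, 0.10],
     [0.05, 0.35]]
kappa, q, a = accuracy_measures(m)
print(round(kappa, 3), round(q, 3), round(a, 3))
```

Splitting total disagreement (here 0.15) into quantity and allocation components is what lets the study report a 2% drop in allocation disagreement for the multi-sensor approach.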

Relevance: 40.00%

Abstract:

We characterize the capacity-achieving input covariance for multi-antenna channels known instantaneously at the receiver and in distribution at the transmitter. Our characterization, valid for arbitrary numbers of antennas, encompasses both the eigenvectors and the eigenvalues. The eigenvectors are found for zero-mean channels with arbitrary fading profiles and a wide range of correlation and keyhole structures. For the eigenvalues, in turn, we present necessary and sufficient conditions as well as an iterative algorithm that exhibits remarkable properties: universal applicability, robustness and rapid convergence. In addition, we identify channel structures for which an isotropic input achieves capacity.

Relevance: 40.00%

Abstract:

When dealing with multi-angular image sequences, problems of reflectance changes due either to illumination and acquisition geometry, or to interactions with the atmosphere, naturally arise. These phenomena interplay with the scene and lead to a modification of the measured radiance: for example, according to the angle of acquisition, tall objects may be seen from top or from the side and different light scatterings may affect the surfaces. This results in shifts in the acquired radiance, that make the problem of multi-angular classification harder and might lead to catastrophic results, since surfaces with the same reflectance return significantly different signals. In this paper, rather than performing atmospheric or bi-directional reflection distribution function (BRDF) correction, a non-linear manifold learning approach is used to align data structures. This method maximizes the similarity between the different acquisitions by deforming their manifold, thus enhancing the transferability of classification models among the images of the sequence.

Relevance: 40.00%

Abstract:

This letter presents advanced classification methods for very high resolution images. Efficient multisource information, both spectral and spatial, is exploited through the use of composite kernels in support vector machines. Weighted summations of kernels accounting for separate sources of spectral and spatial information are analyzed and compared to classical approaches such as pure spectral classification or stacked approaches using all the features in a single vector. Model selection problems are addressed, as well as the importance of the different kernels in the weighted summation.
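The weighted summation of kernels described above can be sketched directly. Assuming Gaussian (RBF) base kernels on the spectral and spatial feature vectors (a common choice, though the letter may use others), the composite kernel is K = μ·K_spectral + (1 − μ)·K_spatial, which remains a valid positive semi-definite kernel for μ in [0, 1]. The feature vectors and parameters below are illustrative.

```python
import math

# Composite kernel: weighted summation of an RBF kernel over spectral
# features and an RBF kernel over spatial features. All values illustrative.

def rbf(x, y, gamma):
    """Gaussian kernel exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def composite_kernel(spec_x, spec_y, spat_x, spat_y, mu, gamma=1.0):
    """mu in [0, 1] balances the spectral and spatial information sources;
    the weighted sum of PSD kernels is itself a PSD kernel."""
    return mu * rbf(spec_x, spec_y, gamma) + (1 - mu) * rbf(spat_x, spat_y, gamma)

# Two pixels with identical spatial context but different spectra:
k = composite_kernel([0.2, 0.9], [0.4, 0.7], [1.0, 0.0], [1.0, 0.0], mu=0.5)
print(round(k, 3))
```

Model selection then amounts to tuning μ (and the kernel parameters), which is how the relative importance of the two sources in the weighted summation is assessed.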