10 results for regression algorithm

in Archivo Digital para la Docencia y la Investigación - Repositorio Institucional de la Universidad del País Vasco


Relevance: 20.00%

Abstract:

Single-species management objectives may not be consistent within mixed fisheries. They may lead species to unsafe situations and promote discarding of over-quota catches and/or misreporting of catches. We provide an algorithm for characterising bio-economic reference points for a mixed fishery as the steady-state solution of a dynamic optimal management problem. The optimisation problem takes into account: i) that the species are fished simultaneously in unselective fishing operations, and ii) intertemporal discounting and fleet costs, so that reference points are related to discounted economic profits along optimal trajectories. We illustrate how the algorithm can be implemented by applying it to the European Northern Stock of Hake (Merluccius merluccius), where fleets also capture Northern megrim (Lepidorhombus whiffiagonis) and Northern anglerfish (Lophius piscatorius and Lophius budegassa). We find that optimal mixed management leads to a target reference point that is quite similar to 2/3 of the single-species (hake) Fmsy target. Mixed management is superior to single-species management because it leads the fishery to higher discounted profits with higher long-term SSB for all species. We calculate that the losses due to the use of the single-species (hake) Fmsy target in this mixed fishery amount to 11.4% of total discounted profits.
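As a rough illustration of the kind of optimisation involved, the sketch below maximises discounted profit over a shared effort trajectory for three stocks with Schaefer (surplus-production) dynamics and reads the late-horizon effort as an approximate steady-state reference point. All growth rates, catchabilities, prices, costs and the discount rate are invented for the example; they are not the paper's bio-economic model or data.

```python
# Minimal sketch: steady-state bio-economic reference point for a mixed fishery,
# approximated by optimising discounted profit over a long shared effort path.
# All parameters below are illustrative, not taken from the hake/megrim/anglerfish case.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.40, 0.30, 0.25])      # intrinsic growth rates (three stocks)
K = np.array([300.0, 120.0, 150.0])   # carrying capacities
q = np.array([0.30, 0.15, 0.20])      # catchabilities of the shared, unselective effort
price = np.array([3.0, 4.0, 5.0])     # price per unit of catch
cost = 80.0                           # cost per unit of effort
delta = 0.05                          # discount rate
T = 60                                # planning horizon (years)

def negative_npv(effort):
    """Simulate Schaefer dynamics under one shared effort path; return minus the NPV."""
    B, npv = K / 2.0, 0.0
    for t, E in enumerate(effort):
        catch = q * E * B                             # unselective gear: one effort, all stocks
        npv += (float(price @ catch) - cost * E) / (1.0 + delta) ** t
        B = np.maximum(B + r * B * (1.0 - B / K) - catch, 1e-6)
    return -npv

res = minimize(negative_npv, x0=np.full(T, 0.5), method="L-BFGS-B",
               bounds=[(0.0, 2.0)] * T)
E_target = res.x[-10:].mean()   # late-horizon effort approximates the steady-state target
print(f"approximate steady-state (target) effort: {E_target:.3f}")
```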

Relevance: 20.00%

Abstract:

This project improves the vision capability of the Robotino robot operating under the ROS platform. A method for recognising an object class using binary features has been developed. The proposed method performs a binary classification of the descriptors of each training image to characterise the appearance of the object class. It uses a binary descriptor based on differences of grey intensity between pixels in the image, and shows that binary features are suitable for representing an object class in spite of the low resolution and the weak detail information available in the image. It also introduces a boosting method (AdaBoost) for feature selection, which eliminates redundancies and noise in order to improve the performance of the classifier. Finally, a kernel SVM (Support Vector Machine) classifier is trained on the available database and applied for predictions on new images. A possible future work is to establish visual servo-control, that is, making the robot react to the detection of the object.
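The pipeline described above (binary pixel-difference descriptors, AdaBoost-based feature selection, then a kernel SVM) can be sketched with scikit-learn roughly as follows. The descriptor layout, image sizes and the toy random data are assumptions for illustration; they are not the Robotino/ROS implementation.

```python
# Hedged sketch: binary intensity-difference features, AdaBoost feature selection, SVM.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
PAIRS = rng.integers(0, 32, size=(256, 2, 2))   # fixed random pixel pairs for 32x32 images

def binary_descriptor(gray_image):
    """Compare the gray intensity of fixed random pixel pairs -> binary feature vector."""
    a = gray_image[PAIRS[:, 0, 0], PAIRS[:, 0, 1]]
    b = gray_image[PAIRS[:, 1, 0], PAIRS[:, 1, 1]]
    return (a > b).astype(np.uint8)

# Toy training data standing in for the low-resolution robot images.
images = rng.integers(0, 256, size=(200, 32, 32))
labels = rng.integers(0, 2, size=200)            # 1 = object class, 0 = background

X = np.array([binary_descriptor(img) for img in images])

# AdaBoost on decision stumps ranks the binary features; keep the most informative ones
# to remove redundant and noisy comparisons.
booster = AdaBoostClassifier(n_estimators=100).fit(X, labels)
selected = np.argsort(booster.feature_importances_)[::-1][:64]

# Kernel SVM trained on the selected features, then usable for predictions on new images.
svm = SVC(kernel="rbf", gamma="scale").fit(X[:, selected], labels)
print("training accuracy on the toy data:", svm.score(X[:, selected], labels))
```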

Relevance: 20.00%

Abstract:

Background: Primary distal renal tubular acidosis (dRTA) caused by mutations in the genes that code for the H+-ATPase pump subunits is a heterogeneous disease with a poor phenotype-genotype correlation. Up to now, large cohorts of Tunisian dRTA patients have not been analyzed, and molecular defects may differ from those described in other ethnicities. We aimed to identify the molecular defects present in the ATP6V1B1, ATP6V0A4 and SLC4A1 genes in a Tunisian cohort, according to the following algorithm: first, ATP6V1B1 gene analysis in dRTA patients with sensorineural hearing loss (SNHL) or unknown hearing status; afterwards, ATP6V0A4 gene study in dRTA patients with normal hearing, and in those without any structural mutation in the ATP6V1B1 gene despite presenting SNHL; finally, analysis of the SLC4A1 gene in those patients with a negative result in the previous studies. Methods: 25 children (19 boys) with dRTA from 20 families of Tunisian origin were studied. DNA was extracted by the standard phenol/chloroform method. Molecular analysis was performed by PCR amplification and direct sequencing. Results: In the index cases, ATP6V1B1 gene screening resulted in a mutation detection rate of 81.25%, which increased up to 95% after ATP6V0A4 gene analysis. Three ATP6V1B1 mutations were observed: one frameshift mutation (c.1155dupC; p.Ile386fs) in exon 12; a G-to-C single nucleotide substitution at the acceptor splicing site (c.175-1G>C; p.?) in intron 2; and one novel missense mutation (c.1102G>A; p.Glu368Lys) in exon 11. We also report four mutations in the ATP6V0A4 gene: one single nucleotide deletion in exon 13 (c.1221delG; p.Met408Cysfs*10); the nonsense mutation c.16C>T; p.Arg6* in exon 3; and the missense changes c.1739T>C; p.Met580Thr in exon 17 and c.2035G>T; p.Asp679Tyr in exon 19. Conclusion: Molecular diagnosis of the ATP6V1B1 and ATP6V0A4 genes was performed in a large Tunisian cohort with dRTA. We identified three different ATP6V1B1 and four different ATP6V0A4 mutations in 25 Tunisian children. One of them, c.1102G>A; p.Glu368Lys in the ATP6V1B1 gene, had not previously been described. Among patients deaf since childhood, 75% carried the ATP6V1B1 c.1155dupC mutation in homozygosity. Based on these results, we propose a new diagnostic strategy to facilitate genetic testing in North Africans with dRTA and SNHL.
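The tiered gene-screening order can be written down as a small driver, sketched below. The function sequence_gene is a hypothetical stub standing in for PCR amplification and direct sequencing; only the ordering of the three genes follows the algorithm stated in the abstract.

```python
# Hedged sketch of the sequential screening strategy; sequence_gene is a placeholder.
from typing import Optional, Tuple

def sequence_gene(patient_id: str, gene: str) -> Optional[str]:
    """Placeholder for PCR amplification + direct sequencing of one gene.
    Returns the mutation found (e.g. 'c.1155dupC; p.Ile386fs') or None."""
    return None  # hypothetical stub; a real pipeline would query sequencing results

def drta_screening(patient_id: str, hearing_status: str) -> Optional[Tuple[str, str]]:
    """Apply the tiered gene order: ATP6V1B1 first when SNHL is present or hearing
    is unknown, ATP6V0A4 next (or first for normal hearing), SLC4A1 last."""
    if hearing_status in ("SNHL", "unknown"):
        order = ["ATP6V1B1", "ATP6V0A4", "SLC4A1"]
    else:  # normal hearing
        order = ["ATP6V0A4", "SLC4A1"]
    for gene in order:
        mutation = sequence_gene(patient_id, gene)
        if mutation is not None:
            return gene, mutation        # stop at the first gene with a causal variant
    return None                          # no mutation found in the screened genes

print(drta_screening("index_case_01", "SNHL"))
```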

Relevance: 20.00%

Abstract:

In this paper, reanalysis fields from the ECMWF have been statistically downscaled to predict surface moisture flux and daily precipitation at two observatories (Zaragoza and Tortosa, Ebro Valley, Spain) from large-scale atmospheric fields during the 1961-2001 period. Three types of downscaling models have been built: (i) analogues, (ii) analogues followed by random forests and (iii) analogues followed by multiple linear regression. The inputs consist of data (predictor fields) taken from the ERA-40 reanalysis. The predicted fields are precipitation and surface moisture flux as measured at the two observatories. With the aim of reducing the dimensionality of the problem, the ERA-40 fields have been decomposed using empirical orthogonal functions. The available daily data have been divided into two parts: a training period (1961-1996) used to find a group of about 300 analogues to build the downscaling models, and a test period (1997-2001) in which the models' performance has been assessed on independent data. In the case of surface moisture flux, the models based on analogues followed by random forests do not clearly outperform those built on analogues plus multiple linear regression, while simple averages calculated from the nearest analogues found in the training period yielded only slightly worse results. In the case of precipitation, the three types of model performed equally well. These results suggest that most of the models' downscaling capability can be attributed to the analogue-calculation stage.
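The downscaling chain (EOF reduction of the predictor fields, analogue search in the training period, then a random forest or a multiple linear regression fitted on the analogue days) can be sketched as below. Data shapes, the number of components and the toy random arrays are illustrative assumptions; the ERA-40 fields and station records are not included.

```python
# Hedged sketch: analogues, analogues + random forest, analogues + multiple linear regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_train, n_test, n_grid = 1000, 100, 500          # toy sizes standing in for 1961-1996 / 1997-2001
fields_train = rng.normal(size=(n_train, n_grid)) # large-scale predictor fields (flattened grids)
fields_test = rng.normal(size=(n_test, n_grid))
obs_train = rng.gamma(2.0, 1.0, size=n_train)     # local predictand, e.g. daily precipitation

# (i) EOF decomposition (PCA) to reduce the dimensionality of the large-scale fields
pca = PCA(n_components=20).fit(fields_train)
pcs_train, pcs_test = pca.transform(fields_train), pca.transform(fields_test)

# (ii) analogue search: the ~300 training days closest to each test day in PC space
knn = NearestNeighbors(n_neighbors=300).fit(pcs_train)
_, idx = knn.kneighbors(pcs_test)

pred_mean, pred_rf, pred_mlr = [], [], []
for day, analog in zip(pcs_test, idx):
    X_a, y_a = pcs_train[analog], obs_train[analog]
    pred_mean.append(y_a.mean())                                   # plain analogue average
    pred_rf.append(RandomForestRegressor(n_estimators=50)
                   .fit(X_a, y_a).predict(day[None])[0])           # analogues + random forest
    pred_mlr.append(LinearRegression()
                    .fit(X_a, y_a).predict(day[None])[0])          # analogues + MLR
```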

Relevance: 20.00%

Abstract:

Background: Consensus development techniques were used in the late 1980s to create explicit criteria for the appropriateness of cataract extraction. We developed a new tool for assessing the appropriateness of indications for cataract extraction following the RAND method, and tested the validity of our panel results. Methods: Criteria were developed using a modified Delphi panel judgment process. A panel of 12 ophthalmologists was assembled. Ratings were analyzed with regard to the level of agreement among panelists. We studied the influence of all variables on the final panel score using linear and logistic regression models. The explicit criteria developed were summarized by classification and regression tree analysis. Results: Of the 765 indications evaluated by the main panel in the second round, 32.9% were found appropriate, 30.1% uncertain, and 37% inappropriate. Agreement was found in 53% of the indications and disagreement in 0.9%. Seven variables were considered to create the indications, which were divided into three groups: simple cataract, cataract with diabetic retinopathy, and cataract with other ocular pathologies. The preoperative visual acuity in the cataractous eye and visual function were the variables that best explained the panel scores. The panel results were synthesized and presented in three decision trees. The misclassification error of the decision trees, compared with the panel's original criteria, was 5.3%. Conclusion: The parameters tested showed acceptable validity for an evaluation tool. These results support the use of this indication algorithm as a screening tool for assessing the appropriateness of cataract extraction in field studies and for the development of practice guidelines.
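The last analysis step, summarising the panel ratings with classification trees, could look roughly like the scikit-learn sketch below. The variable names, the ordinal codings and the synthetic ratings are assumptions; the real analysis used the panel's 765 rated indications.

```python
# Hedged sketch: summarise panel appropriateness ratings with a classification tree.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 765  # number of indications rated by the panel
indications = pd.DataFrame({
    "visual_acuity_cataract_eye": rng.integers(1, 6, n),   # ordinal categories (assumed coding)
    "visual_function": rng.integers(1, 4, n),
    "diabetic_retinopathy": rng.integers(0, 2, n),
    "other_ocular_pathology": rng.integers(0, 2, n),
})
# Synthetic stand-in for the panel's appropriate / uncertain / inappropriate scores.
rating = np.where(indications["visual_acuity_cataract_eye"] <= 2, "appropriate",
         np.where(indications["visual_function"] == 1, "uncertain", "inappropriate"))

tree = DecisionTreeClassifier(max_depth=3).fit(indications, rating)
print(export_text(tree, feature_names=list(indications.columns)))
misclassified = (tree.predict(indications) != rating).mean()
print(f"misclassification vs. panel ratings: {misclassified:.1%}")
```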

Relevance: 20.00%

Abstract:

This paper deals with the convergence of a remote iterative learning control system subject to data dropouts. The system is composed of a set of discrete-time multiple-input multiple-output linear models, each with its corresponding actuator device and sensor. Each actuator applies the input signal vector to its corresponding model at the sampling instants, and the sensor measures the output signal vector. The iterative learning law is processed in a controller located far away from the models, so the control signal vector has to be transmitted from the controller to the actuators through transmission channels. The law uses the measurements of each model to generate the input vector to be applied to the subsequent model, so the measurements of the models have to be transmitted from the sensors to the controller. All transmissions are subject to failures, described as a binary sequence taking the value 1 or 0. A dropout compensation technique is used to replace the data lost in the transmission processes. Convergence to zero of the errors between the output signal vector and a reference is achieved as the number of models tends to infinity.
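A toy version of the scheme is sketched below, with a P-type learning law standing in for the paper's iterative learning law. The plant matrices, the learning gain and the dropout probability are invented; lost packets are simply replaced by the last value successfully received, which is one possible dropout compensation, not necessarily the paper's.

```python
# Hedged sketch: P-type iterative learning control with Bernoulli data dropouts and
# hold-last-value compensation on both transmission channels.
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.8, 0.1], [0.0, 0.9]])   # toy discrete-time MIMO model
B = np.eye(2)
C = np.eye(2)
T, n_iterations = 50, 30                 # trial length and number of models/iterations
L_gain = 0.5 * np.eye(2)                 # learning gain
p_drop = 0.2                             # probability that a transmission is lost

y_ref = np.tile([1.0, -1.0], (T, 1))     # reference output trajectory
u = np.zeros((T, 2))                     # input computed by the controller
e_received = np.zeros((T, 2))            # controller-side copy of the error (compensated)
u_received = np.zeros((T, 2))            # actuator-side copy of the input (compensated)

for k in range(n_iterations):
    # Actuator side: apply the (possibly compensated) input and measure the output.
    x, y = np.zeros(2), np.zeros((T, 2))
    for t in range(T):
        y[t] = C @ x
        x = A @ x + B @ u_received[t]
    e = y_ref - y

    # Sensor -> controller link: dropped samples keep the previously received error.
    arrived = rng.random(T) >= p_drop
    e_received[arrived] = e[arrived]

    # Learning law in the remote controller, then controller -> actuator link with dropouts.
    u = u + e_received @ L_gain.T
    arrived = rng.random(T) >= p_drop
    u_received[arrived] = u[arrived]

print("final tracking error norm:", np.linalg.norm(y_ref - y))
```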

Relevance: 20.00%

Abstract:

This document describes an update of the implementation of the J48Consolidated class within the WEKA platform. The J48Consolidated class implements the CTC algorithm [2][3], which builds a single decision tree based on a set of samples. The J48Consolidated class extends WEKA's J48 class, which implements the well-known C4.5 algorithm. The original implementation was described in the technical report "J48Consolidated: An implementation of CTC algorithm for WEKA". The main, but not only, change in this update is the integration of the notion of coverage in order to determine the number of samples to be generated to build a consolidated tree. We define coverage as the percentage of examples of the training sample present in, or covered by, the set of generated subsamples. Thus, depending on the type of samples used, more or fewer samples will be needed to achieve a specific value of coverage.
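In code, the coverage-driven choice of the number of subsamples can be sketched as follows. The subsampling scheme (drawing 25% of the training set without replacement for each subsample) is an illustrative assumption and not necessarily the option set offered by J48Consolidated.

```python
# Hedged sketch: generate subsamples until their union covers the requested fraction
# of the training set; the number of subsamples needed depends on the sampling scheme.
import numpy as np

rng = np.random.default_rng(4)

def subsamples_for_coverage(n_train, subsample_size, target_coverage=0.99, max_samples=1000):
    """Return the index sets of the generated subsamples and the coverage reached."""
    covered = np.zeros(n_train, dtype=bool)
    samples = []
    while covered.mean() < target_coverage and len(samples) < max_samples:
        idx = rng.choice(n_train, size=subsample_size, replace=False)
        samples.append(idx)
        covered[idx] = True                  # these training examples are now covered
    return samples, covered.mean()

samples, coverage = subsamples_for_coverage(n_train=1000, subsample_size=250)
print(f"{len(samples)} subsamples needed for {coverage:.1%} coverage")
```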

Relevance: 20.00%

Abstract:

The CTC algorithm (Consolidated Tree Construction algorithm) is a machine learning paradigm that was designed to solve a class imbalance problem, namely a fraud detection problem in the area of car insurance [1] where, in addition, an explanation of the classification was required. The algorithm is based on a decision tree construction algorithm, in this case the well-known C4.5, but it extracts knowledge from data using a set of samples instead of a single one, as C4.5 does. In contrast to other methodologies that use several samples to build a classifier, such as bagging, CTC builds a single tree and, as a consequence, obtains comprehensible classifiers. The main motivation of this implementation is to make a public implementation of the CTC algorithm available. To this end, we have implemented the algorithm within the well-known WEKA data mining environment (http://www.cs.waikato.ac.nz/ml/weka/). WEKA is an open source project that contains a collection of machine learning algorithms written in Java for data mining tasks. J48 is the implementation of the C4.5 algorithm within the WEKA package. The implementation of the CTC algorithm, based on the J48 Java class, has been named J48Consolidated.
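The consolidation idea that separates CTC from bagging, namely that each subsample proposes a split, the proposals are put to a vote, and the winning split is applied to every subsample so that one single tree is grown, can be sketched as follows. Binary features and plain information gain are simplifying assumptions; the actual J48Consolidated reuses the C4.5/J48 split machinery.

```python
# Hedged sketch of the CTC consolidation step: vote on the split feature across subsamples.
import numpy as np
from collections import Counter

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_split(X, y):
    """Feature of a binary matrix X with the highest information gain on this subsample."""
    gains = [entropy(y) - sum(entropy(y[X[:, f] == v]) * np.mean(X[:, f] == v)
                              for v in (0, 1) if np.any(X[:, f] == v))
             for f in range(X.shape[1])]
    return int(np.argmax(gains))

def consolidated_tree(subsamples, depth=0, max_depth=3):
    """Grow one tree from several (X, y) subsamples by voting on the split feature."""
    labels = np.concatenate([y for _, y in subsamples])
    if labels.size == 0:
        return {"leaf": None}                            # no examples reach this node
    if depth == max_depth or len(np.unique(labels)) == 1:
        return {"leaf": Counter(labels.tolist()).most_common(1)[0][0]}
    votes = Counter(best_split(X, y) for X, y in subsamples if len(y) > 0)
    feature = votes.most_common(1)[0][0]                 # the consolidated split
    branches = {}
    for v in (0, 1):                                     # partition every subsample alike
        parts = [(X[X[:, feature] == v], y[X[:, feature] == v]) for X, y in subsamples]
        branches[v] = consolidated_tree(parts, depth + 1, max_depth)
    return {"feature": feature, "branches": branches}

rng = np.random.default_rng(5)
X, y = rng.integers(0, 2, (300, 6)), rng.integers(0, 2, 300)
subs = [(X[i], y[i]) for i in (rng.choice(300, 150, replace=False) for _ in range(5))]
print(consolidated_tree(subs))
```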

Relevance: 20.00%

Abstract:

Background: Intratumor heterogeneity may be responsible for the unpredictably aggressive clinical behavior that some clear cell renal cell carcinomas display. This clinical uncertainty may be caused by insufficient sampling, which leaves foci of high-grade tumor areas out of the histological analysis. Although molecular approaches are providing important information on renal intratumor heterogeneity, a focus on this topic from the practicing pathologist's perspective is still pending. Methods: Four distant tumor areas of 40 organ-confined clear cell renal cell carcinomas were selected for histopathological and immunohistochemical evaluation. Tumor size, cell type (clear/granular), Fuhrman's grade and staging, as well as immunostaining with Snail, ZEB1, Twist, Vimentin, E-cadherin, beta-catenin, PTEN, p-Akt, p110 alpha, and SETD2, were analyzed for intratumor heterogeneity using a classification and regression tree algorithm. Results: Cell type and Fuhrman's grade were heterogeneous in 12.5% and 60% of the tumors, respectively. When the cell type was homogeneous (clear cell), the tumors were low-grade in 88.57% of cases. Immunostaining heterogeneity was significant in the series and ranged between 15% for p110 alpha and 80% for Snail. When Snail immunostaining was homogeneous, the tumor was histologically homogeneous in 100% of cases; when Snail was heterogeneous, the tumor was heterogeneous in 75% of cases. Average tumor diameter was 4.3 cm. Tumors larger than 3.7 cm were heterogeneous for Vimentin immunostaining in 72.5% of cases. Tumors displaying negative immunostaining for both ZEB1 and Twist were low grade in 100% of cases. Conclusions: Intratumor heterogeneity is a common event in clear cell renal cell carcinoma and can be monitored by immunohistochemistry in routine practice. Snail seems to be particularly useful in the identification of intratumor heterogeneity. The suitability of current sampling protocols in renal cancer is discussed.
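For illustration, the classification-and-regression-tree step could be reproduced along the lines of the sketch below, with per-tumor heterogeneity flags and tumor size as predictors of grade heterogeneity. The variable names and the synthetic table are assumptions standing in for the study's 40 four-area evaluations.

```python
# Hedged sketch: a classification tree relating marker heterogeneity and tumor size
# to heterogeneity of Fuhrman's grade; the data below are synthetic stand-ins.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
n = 40  # tumors, each scored on four distant areas
data = pd.DataFrame({
    "tumor_size_cm": rng.normal(4.3, 1.2, n).round(1),
    "snail_heterogeneous": rng.integers(0, 2, n),
    "vimentin_heterogeneous": rng.integers(0, 2, n),
    "zeb1_positive": rng.integers(0, 2, n),
    "twist_positive": rng.integers(0, 2, n),
})
grade_heterogeneous = rng.integers(0, 2, n)   # stand-in for the observed Fuhrman heterogeneity

cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5).fit(data, grade_heterogeneous)
print(export_text(cart, feature_names=list(data.columns)))
```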