909 results for EVALUATION MODEL
Abstract:
Contributed to: "Measuring the Changes": 13th FIG International Symposium on Deformation Measurements and Analysis; 4th IAG Symposium on Geodesy for Geotechnical and Structural Engineering (Lisbon, Portugal, May 12-15, 2008).
Abstract:
Background -- N-(4-hydroxyphenyl)retinamide (4-HPR, fenretinide) is a synthetic retinoid with potent pro-apoptotic activity against several types of cancer, but little is known regarding the mechanisms leading to chemoresistance. Ceramide and, more recently, other sphingolipid species (e.g., dihydroceramide and dihydrosphingosine) have been implicated in 4-HPR-mediated tumor cell death. Because sphingolipid metabolism has been reported to be altered in drug-resistant tumor cells, we studied the implication of sphingolipids in acquired resistance to 4-HPR based on an acute lymphoblastic leukemia model. Methods -- CCRF-CEM cell lines resistant to 4-HPR were obtained by gradual selection. Endogenous sphingolipid profiles and in situ enzymatic activities were determined by LC/MS, and resistance to 4-HPR or to alternative treatments was measured using the XTT viability assay and annexin V-FITC/propidium iodide labeling. Results -- No major cross-resistance was observed against other antitumoral compounds (i.e., paclitaxel, cisplatin, doxorubicin hydrochloride) or agents (i.e., ultraviolet C, hydrogen peroxide) also described as sphingolipid modulators. CCRF-CEM cell lines resistant to 4-HPR exhibited a distinctive endogenous sphingolipid profile that correlated with inhibition of dihydroceramide desaturase. Cells maintained acquired resistance to 4-HPR after the removal of 4-HPR, though the sphingolipid profile returned to control levels. On the other hand, combined treatment with sphingosine kinase inhibitors (unnatural (dihydro)sphingosines ((dh)Sph)) and a glucosylceramide synthase inhibitor (PPMP), in the presence or absence of 4-HPR, increased cellular (dh)Sph (but not ceramide) levels and was highly toxic for both parental and resistant cells. Conclusions -- In the leukemia model, acquired resistance to 4-HPR is selective and persists in the absence of sphingolipid profile alteration.
Therapeutically, the data demonstrate that alternative sphingolipid-modulating antitumoral strategies are suitable for both 4-HPR-resistant and sensitive leukemia cells. Thus, whereas sphingolipids may not be critical for maintaining resistance to 4-HPR, manipulation of cytotoxic sphingolipids should be considered a viable approach for overcoming resistance.
Abstract:
In this article we describe the methodology developed for the semiautomatic annotation of EPEC-RolSem, a Basque corpus labeled at the predicate level following the PropBank-VerbNet model. The methodology presented is the product of a detailed theoretical study of the semantic nature of verbs in Basque and of their similarities and differences with verbs in other languages. As part of the proposed methodology, we are creating a Basque lexicon on the PropBank-VerbNet model that we have named the Basque Verb Index (BVI). Our work thus dovetails with the general trend toward building lexicons from tagged corpora that is clear in work conducted for other languages. EPEC-RolSem and BVI are two important resources for the computational semantic processing of Basque; as far as the authors are aware, they are also the first resources of their kind developed for Basque. In addition, each entry in BVI is linked to the corresponding verb entry in well-known resources like PropBank, VerbNet, WordNet, Levin’s Classification and FrameNet. We have also implemented several automatic processes to aid in creating and annotating the BVI, including processes designed to facilitate the task of manual annotation.
Abstract:
The Cross River State (Nigeria) marine and freshwater artisanal capture fisheries are divided into 4 categories according to the type of resources being exploited. Schaefer's production model is applied to each of the fisheries to estimate the maximum sustainable yields (Ymax). The total potential yield for all the fisheries in natural waters is 178,650 tonnes/year. This potential is unlikely to be achieved, as more fishermen are abandoning the occupation due to the scarcity of boats, outboard engines and nets. Even if the full potential were realized, production would still fall short of what the State should produce by about 30.5%. Investment opportunities which, if implemented, can help to narrow the gap between the available and the desired level of production are enumerated.
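The Schaefer calculation used above can be sketched as follows. The effort and catch figures here are invented for illustration only (they are not the paper's data): under the Schaefer model the equilibrium yield is Y(E) = aE - bE², so catch-per-unit-effort is linear in effort and Ymax = a²/(4b).

```python
# Schaefer surplus-production sketch: Y(E) = a*E - b*E**2, so
# CPUE = Y/E = a - b*E is linear in effort, and the maximum sustainable
# yield Ymax = a**2 / (4*b) occurs at effort E = a / (2*b).
# Hypothetical effort (boat-days) and catch (tonnes), not the paper's data:
effort = [10, 20, 30, 40, 50]
catch_ = [900, 1600, 2100, 2400, 2500]

# Fit the CPUE-vs-effort line by ordinary least squares.
cpue = [c / e for c, e in zip(catch_, effort)]
n = len(effort)
mx = sum(effort) / n
my = sum(cpue) / n
slope = sum((e - mx) * (u - my) for e, u in zip(effort, cpue)) / \
        sum((e - mx) ** 2 for e in effort)
a = my - slope * mx          # intercept of the CPUE line
b = -slope                   # (positive) slope magnitude
ymax = a * a / (4 * b)       # maximum sustainable yield
print(ymax)                  # -> 2500.0 for this synthetic data
```

With real data the fit would be noisy and `ymax` only an estimate; the closed-form least-squares fit keeps the sketch dependency-free.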
Abstract:
4 p.
Abstract:
Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers on certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets.
Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
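The proper orthogonal decomposition step mentioned above can be sketched in a few lines: snapshot POD amounts to an SVD of the mean-subtracted data matrix, with the squared singular values ranking modes by energy. The synthetic snapshot matrix below (two planted coherent modes plus weak noise) is a stand-in for the LES data, not the thesis's actual fields:

```python
import numpy as np

# Synthetic space-time snapshot matrix: two coherent "modes" plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)              # spatial grid
t = np.linspace(0, 10, 200)                    # snapshot times
Q = (np.outer(np.sin(x), np.cos(2 * t))        # planted mode 1
     + 0.5 * np.outer(np.sin(2 * x), np.sin(3 * t))   # planted mode 2
     + 0.01 * rng.standard_normal((64, 200)))         # broadband noise

# POD = SVD of the mean-subtracted snapshot matrix; columns of U are the
# spatial POD modes, and s**2 ranks them by energy content.
Qm = Q - Q.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Qm, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy[:2].sum())    # the two planted modes capture nearly all energy
```

The empirical resolvent-mode decomposition used in the thesis ranks modes by gain rather than energy, so this sketch only illustrates the POD half of the analysis.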
Abstract:
The Inter-American Tropical Tuna Commission (IATTC) staff has been sampling the size distributions of tunas in the eastern Pacific Ocean (EPO) since 1954, and the species composition of the catches since 2000. The IATTC staff use the data from the species composition samples, in conjunction with observer and/or logbook data, and unloading data from the canneries, to estimate the total annual catches of yellowfin (Thunnus albacares), skipjack (Katsuwonus pelamis), and bigeye (Thunnus obesus) tunas. These sample data are collected based on a stratified sampling design. I propose an update of the stratification of the EPO into more homogeneous areas in order to reduce the variance in the estimates of the total annual catches and to incorporate the geographical shifts resulting from the expansion of the floating-object fishery during the 1990s. The sampling model used by the IATTC is a stratified two-stage (cluster) random sampling design with first-stage units varying (unequal) in size. The strata are month, area, and set type. Wells, the first cluster stage, are selected to be sampled only if all of the fish were caught in the same month, same area, and same set type. Fish, the second cluster stage, are sampled for lengths and, independently, for species composition of the catch. The EPO is divided into 13 sampling areas, which were defined in 1968 based on the catch distributions of yellowfin and skipjack tunas. This area stratification does not reflect the multi-species, multi-set-type fishery of today. In order to define more homogeneous areas, I used agglomerative cluster analysis to look for groupings of the size data and the catch and effort data for 2000–2006. I plotted the results from both datasets against the IATTC Sampling Areas, and then created new areas. I also used the results of the cluster analysis to update the substitution scheme for strata with catch, but no sample.
I then calculated the total annual catch (and variance) by species by stratifying the data into new Proposed Sampling Areas and compared the results to those reported by the IATTC. Results showed that re-stratifying the areas produced smaller variances of the catch estimates for some species in some years, but the results were not significant.
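The stratified total and its variance can be sketched with a simplified single-stage version of the estimator (the study's design is two-stage cluster sampling, which adds a second variance component). All numbers below are made up for illustration:

```python
from statistics import mean, variance

# Simplified stratified estimator: within each stratum (month x area x
# set type) the total is N_h * ybar_h, and its variance uses the
# finite-population correction (1 - n_h/N_h).
# Wells per stratum (N) and per-well sampled catches are hypothetical.
strata = [
    {"N": 10, "samples": [2, 4, 6]},
    {"N": 20, "samples": [3, 6, 9]},
]

total = 0.0
var = 0.0
for h in strata:
    n = len(h["samples"])
    ybar = mean(h["samples"])
    s2 = variance(h["samples"])   # sample variance (n-1 denominator)
    total += h["N"] * ybar        # expand stratum mean to stratum total
    var += h["N"] ** 2 * (1 - n / h["N"]) * s2 / n
print(total, var)                 # -> 160.0 and ~1113.33
```

Re-stratifying into more homogeneous areas reduces the within-stratum `s2` terms, which is exactly what drives the variance reduction sought in the study.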
Abstract:
This research had as its primary objective to model different types of problems using linear programming and to apply different methods so as to find an adequate solution to them. To achieve this objective, a linear programming problem and its dual were studied and compared. For that, linear programming techniques were presented and an introduction to duality theory was given, analyzing the dual problem and the duality theorems. Then, a general economic interpretation was given, and optimal dual variables such as shadow prices were studied through the following practical case: an aesthetic surgery hospital wanted to organize its monthly waiting list of four types of surgeries to maximize its daily income. To solve this practical case, we modelled the linear programming problem following the relationships between the primal problem and its dual. Additionally, we solved the dual problem graphically, and then found the optimal solution of the practical case through its dual, following the theorems of duality theory. Moreover, we studied how complementary slackness can help to solve linear programming problems. To facilitate the solution, the Solver add-in in Excel and the WinQSB program were used.
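The primal-dual relationships described above can be checked numerically on a small example. The two-variable LP below is hypothetical (not the hospital case); with only two variables, both primal and dual can be solved by brute-force vertex enumeration, and strong duality plus the shadow-price interpretation fall out directly:

```python
from itertools import combinations

def lp_max_2d(A, b, c):
    """Maximize c.x over {x : A x <= b} by enumerating constraint-pair
    vertices (adequate for two variables). Row i of A with b[i] encodes
    a1*x1 + a2*x2 <= b[i]. Returns (optimal value, optimal point)."""
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a1, a2), (a3, a4) = A[i], A[j]
        det = a1 * a4 - a2 * a3
        if abs(det) < 1e-12:
            continue                      # parallel constraints: no vertex
        x = (b[i] * a4 - a2 * b[j]) / det
        y = (a1 * b[j] - b[i] * a3) / det
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best

# Primal: maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0
A_p = [(1, 1), (1, 3), (-1, 0), (0, -1)]
b_p = [4, 6, 0, 0]
primal, _ = lp_max_2d(A_p, b_p, (3, 2))

# Dual: minimize 4u + 6v  s.t.  u + v >= 3,  u + 3v >= 2,  u, v >= 0,
# rewritten as maximizing -(4u + 6v) with <= constraints.
A_d = [(-1, -1), (-1, -3), (-1, 0), (0, -1)]
b_d = [-3, -2, 0, 0]
neg_dual, (u, v) = lp_max_2d(A_d, b_d, (-4, -6))
dual = -neg_dual
print(primal, dual)        # strong duality: both optima equal 12

# Shadow price: raising the first resource from 4 to 5 increases the
# primal optimum by exactly the dual variable u.
primal_b5 = lp_max_2d(A_p, [5, 6, 0, 0], (3, 2))[0]
print(primal_b5 - primal)  # equals u
```

Complementary slackness also shows up here: the second primal constraint is slack at the optimum (4 < 6), and correspondingly its dual variable `v` is zero.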
Abstract:
We investigated age, growth, and ontogenetic effects on the proportionality of otolith size to fish size in laboratory-reared delta smelt (Hypomesus transpacificus) from the San Francisco Bay estuary. Delta smelt larvae were reared from hatching in laboratory mesocosms for 100 days. Otolith increments from known-age fish were enumerated to validate that growth increments were deposited daily and to determine the age of fish at first ring formation. Delta smelt were found to lay down daily ring increments; however, the first increment did not form until six days after hatching. The relationship between otolith size and fish size was not biased by age or growth-rate effects but did exhibit an interruption in linear growth owing to an ontogenetic shift at the postflexion stage. To back-calculate the size-at-age of individual fish, we modified the biological intercept (BI) model to account for ontogenetic changes in the otolith-size–fish-size relationship and compared the results to the time-varying growth model, as well as the modified Fry model. We found that the modified BI model estimated size-at-age from hatching to 100 days after hatching more accurately. Before back-calculating size-at-age with existing models, we recommend a critical evaluation of the effects that age, growth, and ontogeny can have on the otolith-size–fish-size relationship.
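For reference, the standard (unmodified) biological-intercept equation that the study builds on is a one-liner; the study's modification for the ontogenetic shift is not reproduced here, and the input values below are hypothetical:

```python
def back_calc_bi(L_c, O_c, O_i, L_0, O_0):
    """Standard biological-intercept back-calculation: fish length at the
    time the otolith radius was O_i, anchored at the biological intercept
    (O_0, L_0), i.e. fish size and otolith size at first increment
    formation. L_c and O_c are size and otolith radius at capture."""
    return L_c + (O_i - O_c) * (L_c - L_0) / (O_c - O_0)

# Hypothetical values: 60 mm fish, 1.2 mm otolith radius at capture,
# biological intercept at 5 mm fish length and 0.01 mm otolith radius.
print(back_calc_bi(60.0, 1.2, 0.6, 5.0, 0.01))  # length at O_i = 0.6 mm
```

The model interpolates linearly between the biological intercept and the size at capture, which is exactly the assumption the abstract says breaks down at the postflexion stage.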
Abstract:
I simulated somatic growth and accompanying otolith growth using an individual-based bioenergetics model in order to examine the performance of several back-calculation methods. Four shapes of otolith radius-total length relations (OR-TL) were simulated. Ten different back-calculation equations, two different regression models of radius length, and two schemes of annulus selection were examined for a total of 20 different methods to estimate size at age from simulated data sets of length and annulus measurements. The accuracy of each of the twenty methods was evaluated by comparing the back-calculated length-at-age and the true length-at-age. The best back-calculation technique was directly related to how well the OR-TL model fitted. When the OR-TL was sigmoid shaped and all annuli were used, employing a least squares linear regression coupled with a log-transformed Lee back-calculation equation (y-intercept corrected) resulted in the least error; when only the last annulus was used, employing a direct proportionality back-calculation equation resulted in the least error. When the OR-TL was linear, employing a functional regression coupled with the Lee back-calculation equation resulted in the least error when all annuli were used, and also when only the last annulus was used. If the OR-TL was exponentially shaped, direct substitution into the fitted quadratic equation resulted in the least error when all annuli were used, and when only the last annulus was used. Finally, an asymptotically shaped OR-TL was best modeled by the individually corrected Weibull cumulative distribution function when all annuli were used, and when only the last annulus was used.
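Two of the back-calculation equations compared above can be stated compactly; these are the standard Lee (Fraser-Lee) and direct-proportionality forms, with hypothetical inputs:

```python
def fraser_lee(L_c, O_c, O_i, c):
    """Fraser-Lee (Lee) back-calculation: length at the time the otolith
    radius was O_i, where c is the y-intercept of the length-on-otolith
    regression (the 'y-intercept corrected' equation in the text)."""
    return c + (L_c - c) * O_i / O_c

def direct_proportion(L_c, O_c, O_i):
    """Direct-proportionality back-calculation: L_i / L_c = O_i / O_c."""
    return L_c * O_i / O_c

# Hypothetical fish: 100 mm at capture, otolith radius 2.0 mm,
# regression intercept c = 20 mm; annulus at radius 1.0 mm.
print(fraser_lee(100.0, 2.0, 1.0, 20.0))       # -> 60.0
print(direct_proportion(100.0, 2.0, 1.0))      # -> 50.0
```

With a zero intercept the two equations coincide; the nonzero intercept is what lets the Lee form absorb a linear OR-TL relation that does not pass through the origin.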
Abstract:
Over the past decade, a variety of user models have been proposed for user-simulation-based reinforcement learning of dialogue strategies. However, the strategies learned with these models are rarely evaluated in actual user trials, and it remains unclear how the choice of user model affects the quality of the learned strategy. In particular, the degree to which strategies learned with a user model generalise to real user populations has not been investigated. This paper presents a series of experiments that qualitatively and quantitatively examine the effect of the user model on the learned strategy. Our results show that the performance and characteristics of the strategy are in fact highly dependent on the user model. Furthermore, a policy trained with a poor user model may appear to perform well when tested with the same model, but fail when tested with a more sophisticated user model. This raises significant doubts about the current practice of learning and evaluating strategies with the same user model. The paper further investigates a new technique for testing and comparing strategies directly on real human-machine dialogues, thereby avoiding any evaluation bias introduced by the user model. © 2005 IEEE.
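The evaluation-bias phenomenon described above can be illustrated with a deliberately tiny toy: both "user models" below are invented single-step success probabilities per dialogue action, not the paper's simulators, and the learner is a plain greedy estimator rather than the paper's RL setup:

```python
import random
random.seed(0)

# Two hypothetical simulated users, each mapping a dialogue action (0..2)
# to a task-success probability. The "poor" model rewards action 0; the
# more sophisticated model rewards action 2.
user_simple = {0: 0.9, 1: 0.5, 2: 0.3}
user_rich   = {0: 0.2, 1: 0.5, 2: 0.9}

def learn_greedy_policy(user, n=2000):
    """Estimate each action's success rate from simulated dialogues with
    `user`, then pick the apparently best action."""
    est = {a: sum(random.random() < p for _ in range(n)) / n
           for a, p in user.items()}
    return max(est, key=est.get)

def evaluate(policy, user, n=2000):
    """Empirical success rate of a fixed policy against a user model."""
    return sum(random.random() < user[policy] for _ in range(n)) / n

pol = learn_greedy_policy(user_simple)     # trained with the poor model
score_same = evaluate(pol, user_simple)    # looks good on the same model...
score_rich = evaluate(pol, user_rich)      # ...but fails on the richer one
print(pol, score_same, score_rich)
```

The gap between `score_same` and `score_rich` is precisely the evaluation bias the paper warns about when a strategy is learned and tested with the same user model.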
Abstract:
Product design development has increasingly become a collaborative process. Conflicts often appear in the design process due to multi-actor interactions. A critical element of collaborative design is therefore the resolution of conflict situations. In this paper, a methodology based on a process model is proposed to support conflict management. This methodology deals mainly with identifying the conflict-resolution team and evaluating the impact of the selected solution. The proposed process model enables design-process traceability and identification of the data-dependency network, which makes it possible to identify the conflict-resolution actors as well as to evaluate the impact of the selected solution. Copyright © 2006 IFAC.
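One way such a data-dependency network can drive both tasks can be sketched as follows; the design data, dependency edges, and actor names are all hypothetical, not taken from the paper:

```python
from collections import deque

# Hypothetical design data: edge "x: [...]" lists data derived from x,
# and each datum is owned by one actor.
depends_on = {
    "shaft_diameter": ["bearing_choice", "gear_module"],
    "bearing_choice": ["housing_bore"],
    "gear_module": [],
    "housing_bore": [],
}
owner = {
    "shaft_diameter": "mech_designer",
    "bearing_choice": "supplier_engineer",
    "gear_module": "gear_specialist",
    "housing_bore": "manufacturing",
}

def impact(datum):
    """All data transitively affected by changing `datum` (downstream BFS).
    This is the 'solution impact' set for a change to that datum."""
    seen, queue = set(), deque([datum])
    while queue:
        for child in depends_on[queue.popleft()]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# The conflict-resolution team = owners of the conflicted datum plus
# owners of everything its change would impact.
impacted = impact("shaft_diameter")
team = {owner["shaft_diameter"]} | {owner[d] for d in impacted}
print(sorted(team))
```

Traceability here is just the recorded edges: the same traversal that scopes the impact of a candidate solution also names the actors who must take part in resolving the conflict.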
Abstract:
An implementation of the inverse vector Jiles-Atherton model for the solution of non-linear hysteretic finite element problems is presented. The implementation applies the fixed-point method with differential reluctivity values obtained from the Jiles-Atherton model. Differential reluctivities are usually computed using numerical differentiation, which is ill-posed and amplifies small perturbations, causing large sudden increases or decreases of differential reluctivity values that may lead to numerical problems. A rule-based algorithm for conditioning differential reluctivity values is presented. Unwanted perturbations on the computed differential reluctivity values are eliminated or reduced with the aim of guaranteeing convergence. Details of the algorithm are presented together with an evaluation of the algorithm on a numerical example. The algorithm is shown to guarantee convergence, although the rate of convergence depends on the choice of algorithm parameters. © 2011 IEEE.
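The flavour of such rule-based conditioning can be sketched with two generic rules, clamping to a physical range and limiting jumps between successive values; both the rules and the thresholds below are illustrative assumptions, not the paper's actual algorithm:

```python
def condition_reluctivity(nu_vals, nu_min, nu_max, max_step):
    """Sketch of rule-based conditioning for differential reluctivities:
    clamp each value into a physically plausible range [nu_min, nu_max]
    and limit the change between consecutive values to max_step, so that
    spikes from ill-posed numerical differentiation cannot destabilise
    the fixed-point iteration. Rules/thresholds are illustrative only."""
    out, prev = [], None
    for nu in nu_vals:
        nu = min(max(nu, nu_min), nu_max)      # rule 1: clamp to bounds
        if prev is not None and abs(nu - prev) > max_step:
            nu = prev + max_step * (1 if nu > prev else -1)  # rule 2: limit jump
        out.append(nu)
        prev = nu
    return out

# A noisy numerical-differentiation result with a huge spike and a
# non-physical negative value (hypothetical numbers):
print(condition_reluctivity([100, 1e6, 50, -5], nu_min=10,
                            nu_max=1000, max_step=200))
```

As the abstract notes, stronger conditioning (smaller `max_step`) trades convergence speed for robustness, which matches the reported sensitivity to algorithm parameters.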