901 results for Predictive Intervals


Relevance: 20.00%

Abstract:

Background. Employee assistance programs (EAPs) for airline pilots at companies with a well-developed recovery management program are known to reduce pilot absenteeism following treatment. Given the costs and safety consequences to society, it is important to identify pilots who may be experiencing an alcohol or other drug (AOD) disorder and get them into treatment.

Hypotheses. This study investigated the predictive power of workplace absenteeism in identifying AOD disorders. The first hypothesis was that higher absenteeism in a 12-month period is associated with a higher risk that an employee is experiencing an AOD disorder. The second hypothesis was that AOD treatment would reduce subsequent absence rates and the costs of replacing pilots on missed flights.

Methods. A case-control design was used with eight years of monthly archival absence data (53,000 pay records) for a sample of 76 employees with an AOD diagnosis (cases) matched 1:4 with 304 non-diagnosed employees (controls) of the same profession and company (male commercial airline pilots). Cases and controls were matched on age, rank, and date of hire. Absence rate was defined as sick-time hours used divided by the minimum-guarantee pay hours, annualized over the months the pilot worked that year. Conditional logistic regression was used to determine whether absence predicted an AOD diagnosis, starting three years before the cases received the diagnosis. Repeated-measures ANOVA, t tests, and rate ratios (with 95% confidence intervals) were used to test for differences between cases and controls in absence usage for the three years before and five years after treatment. Mean replacement costs for sick-leave usage over the same periods were calculated to estimate the cost of sick leave from the company's perspective.

Results. Sick leave, as measured by absence rate, predicted the risk of being diagnosed with an AOD disorder (OR 1.10, 95% CI 1.06-1.15) during the 12 months prior to the diagnosis. Mean absence rates for diagnosed employees increased over the three years before treatment, particularly in the final year, whereas the controls' did not (three years prior, x̄ = 6.80 vs. 5.52; two years, x̄ = 7.81 vs. 6.30; one year, x̄ = 11.00 for cases vs. 5.51 for controls). In the first year post-treatment, compared with the year prior to treatment, rate ratios indicated a significant 60% reduction in absence rates (OR = 0.40, 95% CI 0.28-0.57). Absence rates for cases remained lower than controls' for the first three years after completion of treatment. Upon discharge from the FAA and company's three-year AOD monitoring program, cases' absence rates increased slightly during the fourth year (controls: x̄ = 0.09, SD = 0.14; cases: x̄ = 0.12, SD = 0.21). By the following year, however, their mean absence rates were again below those of the controls (controls: x̄ = 0.08, SD = 0.12; cases: x̄ = 0.06, SD = 0.07). Costs of replacing pilots who called in sick fell significantly (by 60%) between the cases' year of diagnosis and their first year back at work, and replacement costs continued to decline over the next two years for the treated employees.

Conclusions. This research demonstrates the potential of workplace absence records as an active organizational surveillance mechanism to help managers and supervisors identify employees who may be experiencing, or at risk of experiencing, an alcohol/drug disorder. Currently, many workplaces rely only on performance problems and ignore the employee's absence record. A referral to an EAP or an alcohol/drug evaluation based on the employee's absence/sick-leave record, as incorporated into company policy, provides another useful indicator that may also carry less stigma, thus reducing barriers to seeking help. This research also confirms two conclusions heretofore based only on cross-sectional studies: (1) higher absence rates are associated with employees experiencing an AOD disorder; and (2) treatment is associated with lower costs for replacing absent pilots. Given the uniqueness of the employee population studied (commercial airline pilots) and the organizational documentation of absence, the generalizability of this study to other professions and occupations should be considered limited.

Transition to Practice. The odds ratios for the relationship between absence rates and an AOD diagnosis are precise: the OR for the year of diagnosis indicates that the odds of being diagnosed increase 10% for every one-hour change in sick leave taken. In practice, however, a pilot uses approximately 20 hours of sick leave per missed trip, because the replacement must be paid the guaranteed minimum of 20 hours. Thus, a rate based on hourly changes is precise but not practical. To provide the organization with practical recommendations, the yearly mean absence rates were used. A pilot flies, on average, 90 hours a month, or 1,080 hours annually. Cases used almost twice the mean rate of sick time in the year prior to diagnosis (T-1) as controls (cases x̄ = 0.11, controls x̄ = 0.06). Cases are thus expected to use about 119 hours annually (total annual hours × mean annual absence rate), while controls will use about 60 hours. The controls' 60 hours translate to three trips of 20 hours each. Management could use 80 or more hours of sick time claimed in a year as the threshold for unacceptable absence, a 25% increase over the controls (a cost to the company of approximately $4,000). At the 80-hour mark, the Chief Pilot could call the pilot in for a routine check on the nature of the excessive absence. This management action would be based on a company standard rather than on a behavioral or performance issue. Using absence data in this fashion would make it an active surveillance mechanism.
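To make the "Transition to Practice" arithmetic concrete, here is a minimal sketch using only the figures quoted above (1,080 annual flight hours, mean absence rates of 0.11 and 0.06, 20-hour trips, an 80-hour threshold); the function and variable names are ours, not the study's.

```python
# Sketch of the screening arithmetic from the abstract; all figures quoted there.
ANNUAL_FLIGHT_HOURS = 90 * 12          # ~90 hours/month -> 1,080 hours/year
TRIP_HOURS = 20                        # guaranteed minimum paid per replacement trip

def expected_sick_hours(absence_rate: float) -> float:
    """Expected annual sick-leave hours = total annual hours * mean absence rate."""
    return ANNUAL_FLIGHT_HOURS * absence_rate

cases_hours = expected_sick_hours(0.11)     # ~119 hours/year
controls_hours = expected_sick_hours(0.06)  # ~65 hours/year, reported as ~60

# Screening rule suggested above: flag pilots claiming >= 80 sick hours per year.
def flag_for_review(annual_sick_hours: float, threshold: float = 80.0) -> bool:
    return annual_sick_hours >= threshold

print(cases_hours, controls_hours)      # 118.8 64.8
print(flag_for_review(cases_hours))     # True -> routine check by the Chief Pilot
print(flag_for_review(controls_hours))  # False
```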

Relevance: 20.00%

Abstract:

The use of feminine products such as vaginal douches, tampons, and sanitary napkins is common among women. Despite some studies suggesting an association between douching and bacterial vaginosis, douching remains understudied, and the possibility of an association between tampon use and infection has received little investigation since the toxic shock syndrome outbreak of the 1980s. The first objective of our study was to evaluate demographic, reproductive health, and sexual behavior variables to establish an epidemiologic profile of menstruating women who reported douching and women who reported using sanitary napkins only. The second objective was to evaluate whether douching and tampon use were associated with an increased risk of bacterial vaginosis or trichomonas. We analyzed these factors, using logistic regression, among the 3,174 women in the 2001-2004 cross-sectional NHANES data who met our inclusion criteria. The women with the highest reported frequency of douching were aged 36-49, had a high school education or GED, were black, were not taking oral contraceptives, reported vaginal symptoms in the last month, had two or more sexual partners in the last year, or tested positive for bacterial vaginosis or trichomonas. The profile for those with the highest frequency of exclusive sanitary napkin use included women with less than a high school education, married women, women classified as black or "other" in race, and women not taking oral contraceptives. While we established a significant increase in the odds of douching among women who tested positive for bacterial vaginosis or trichomonas, we found no significant association between exclusive napkin use and testing negative for bacterial vaginosis or trichomonas.
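The second objective reduces to logistic regressions of infection status on product-use indicators. The sketch below shows the general shape of such a model; the column names and synthetic data are hypothetical, and the real analysis would also need to account for the NHANES complex survey design.

```python
# A minimal sketch (not the authors' code) of a logistic regression of infection
# status on douching and tampon use. Synthetic data; hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 3174
df = pd.DataFrame({
    "douches": rng.integers(0, 2, n),       # 1 = reported douching
    "tampon_use": rng.integers(0, 2, n),    # 1 = reported tampon use
    "age": rng.integers(18, 50, n),
})
logit_p = -1.5 + 0.6 * df["douches"] + 0.05 * df["tampon_use"]
df["bv_positive"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("bv_positive ~ douches + tampon_use + age", data=df).fit(disp=False)
print(np.exp(fit.params))        # odds ratios
print(np.exp(fit.conf_int()))    # 95% CIs for the odds ratios
```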

Relevance: 20.00%

Abstract:

The objectives of this study were to identify and measure the average outcomes of the Open Door Mission's nine-month community-based substance abuse treatment program, identify predictors of successful outcomes, and make recommendations to the Open Door Mission for improving its treatment program.

The Mission's program serves only adult men with limited financial resources, most of whom were homeless or dependent on parents or other family members for basic living needs. Many, but not all, of these men are either chemically dependent or have a history of substance abuse.

This study tracked a cohort of the Mission's graduates over one year and identified various indicators of success at short-term intervals that may be predictive of longer-term outcomes. We tracked several levels of 12-step program involvement, as well as other social and spiritual activities such as church affiliation and recovery support.

Twenty-four of the 66 subjects (36%) met the Mission's requirements for success. Against the individual success criteria: 54 (82%) reported affiliation with a home church; 26 (39%) reported full-time employment; 61 (92%) did not report, and were not identified as having, any post-treatment arrests or incarceration; and 40 (61%) reported continuous abstinence from both drugs and alcohol.

Five research-based hypotheses were developed and tested. The primary analysis tool was B-Course, a web-based non-parametric dependency modeling tool, which revealed strong associations for certain variables and helped the researchers generate and test several data-driven hypotheses. Full-time employment was the greatest predictor of abstinence: 95% of those who reported full-time employment also reported continuous post-treatment abstinence, compared with 50% of those working part-time and 29% of those with no employment. Working with a 12-step sponsor, attending aftercare, and service with others were also identified as predictors of abstinence.

This study demonstrates that associations with abstinence and the ODM success criteria are not based on any single social or behavioral factor. Rather, these relationships are interdependent and show that abstinence is achieved and maintained through a combination of several 12-step recovery activities. The study used a simple assessment methodology that demonstrated strong associations across variables and outcomes, with practical applicability to the Open Door Mission for improving its treatment program. By leveraging the predictive capability of the success-determination methods developed throughout this study, outcomes can be identified with both validity and reliability. This assessment instrument can also be used as an intervention that, if administered to the Mission's clients during the primary treatment program, may measurably improve the program's effectiveness and outcomes.
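The headline finding (abstinence by employment status) is a simple conditional proportion. A toy reconstruction follows, with record-level data fabricated only so the proportions match the 95%/50%/29% figures quoted above; it is not the study's B-Course analysis.

```python
# Toy cross-tabulation of post-treatment abstinence by employment status.
# Record-level values are fabricated to reproduce the quoted proportions.
import pandas as pd

df = pd.DataFrame({
    "employment": ["full-time"] * 20 + ["part-time"] * 6 + ["none"] * 7,
    "abstinent":  [1]*19 + [0]*1 + [1]*3 + [0]*3 + [1]*2 + [0]*5,
})
# Proportion abstinent within each employment level, as in the abstract.
print(df.groupby("employment")["abstinent"].mean())
# full-time ~0.95, part-time 0.50, none ~0.29
```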

Relevance: 20.00%

Abstract:

Multiple Endocrine Neoplasia type 1 (MEN1) is a hereditary cancer syndrome characterized by tumors of the endocrine system, which develop most commonly in the parathyroid glands, the pituitary gland, and the gastro-entero-pancreatic tract. MEN1 is highly penetrant, and age of onset is variable. Most patients are diagnosed in early adulthood; however, rare cases of MEN1 present in early childhood. Expert consensus opinion holds that predictive genetic testing should be offered at age 5 years; however, no evidence-based studies clearly establish that predictive genetic testing at this age is beneficial, since most symptoms do not present until later in life. This study was designed to explore attitudes about the most appropriate age for predictive genetic testing among individuals at risk of having a child with MEN1. Participants with an MEN1 mutation were invited to complete a survey and asked to invite their spouses to participate as well. The survey included several validated measures of participants' attitudes about predictive testing in minors. Fifty-eight affected participants and twenty-two spouses/partners completed the survey. Most participants felt that MEN1 genetic testing was appropriate in healthy minors. Younger participant age and greater knowledge of MEN1 genetics and inheritance predicted support for genetic testing at a younger age. Additionally, participants who saw more positive than negative general outcomes from genetic testing were more likely to favor testing at younger ages. Overall, participants felt genetic testing should be offered at a younger age than for most adult-onset conditions, and most felt the appropriate time for testing was when a child could understand and participate in the testing process. Psychological concerns were the primary focus of participants who favored later ages for genetic testing, while medical benefits were more commonly cited in favor of younger ages. This exploratory study has implications for counseling patients whose children are at risk of developing MEN1 and illustrates issues that are important to patients and their spouses when considering testing in children.

Relevance: 20.00%

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU," lays out the theoretical background for the project. Several core concepts are presented. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrest, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables.

Building a solid foundation for any given modeling task involves addressing a number of design-phase issues, including selecting the proper candidate features on which to base the model and selecting the most appropriate tools to measure them. We also identified several design issues unique to adding time series data elements to the set of candidate features. One is defining the duration and resolution of the time series elements required to sufficiently characterize the time series phenomena being considered as candidate features. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature.

In synthesizing this comprehensive framework, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data follow the standard one-value-per-variable paradigm widely employed in a host of clinical models and tools, and are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that represent a particular clinical phenomenon more accurately than any of the directly measured elements in isolation. The remaining two classes are unique to time series data. The first is the raw data elements: multiple values per variable, constituting the measured observations typically available to end users when they review time series data, often represented as dots on a graph. The final class of data results from performing time series analysis, and it is the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler, but we generally recommend performing a variety of analyses in order to maximize the likelihood of producing a representation of the time series data elements that can distinguish between two or more classes of outcomes.
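To make the fourth class of data concrete, here is a minimal sketch (assumptions ours) of turning a raw time series window into a latent candidate feature: the slope of a least-squares trend line. The window length, sampling resolution, and choice of statistic are precisely the design decisions the manuscript discusses; the 60-minute heart-rate window below is purely illustrative.

```python
# Reduce a time series window to one latent feature: a least-squares trend slope.
import numpy as np

def trend_slope(values: np.ndarray, minutes_per_sample: float = 1.0) -> float:
    """Least-squares slope (units per minute) over the window."""
    t = np.arange(len(values)) * minutes_per_sample
    slope, _intercept = np.polyfit(t, values, deg=1)
    return slope

# Hypothetical 60-minute heart-rate window sampled once per minute.
hr = 120 - 0.4 * np.arange(60) + np.random.default_rng(1).normal(0, 2, 60)
features = {
    "hr_last": hr[-1],            # traditional one-value-per-variable snapshot
    "hr_mean": hr.mean(),         # simple summary statistic
    "hr_trend": trend_slope(hr),  # time series analysis result (deterioration)
}
print(features)  # the negative hr_trend captures deterioration the snapshot misses
```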
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU," provides a detailed description, start to finish, of the methods required to prepare the data and to build and validate a predictive model that uses the time series data elements determined in the first paper. One of its fundamental tenets is that manual implementations of time-series-based models are infeasible, due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is therefore analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies for each step, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. The second paper also brings to light that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used with traditional multivariate data. Even after the design-phase complexities (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) raise issues that are absent from the traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data to conform to the predefined structure specified during the design phase; and normalizing variable families rather than individual variable instances.

The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit," presents the results obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%; the area under the receiver operating characteristic curve increased from a baseline of 0.87 to 0.98 when the trend analysis was included. We were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared with the baseline multivariate model, but diminished accuracy compared with adding the trend analysis features alone (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond what was achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
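The headline comparison (a baseline multivariate model versus the same model augmented with trend features, scored by ROC AUC) has roughly the following shape. This is a schematic reconstruction on synthetic data, not the study's model or dataset.

```python
# Compare a snapshot-only model against one augmented with trend features via AUC.
# Synthetic data stand in for the pediatric-ICU dataset, which we do not have.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
snapshot = rng.normal(size=(n, 4))   # one-value-per-variable features
trend = rng.normal(size=(n, 4))      # slopes from time series windows
y = (0.2 * snapshot[:, 0] + 1.5 * trend[:, 0] + rng.normal(size=n)) > 0.8

def auc(X):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print("baseline (snapshot only):", auc(snapshot))
print("with trend features:     ", auc(np.hstack([snapshot, trend])))
```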

Relevance: 20.00%

Abstract:

Li-Fraumeni syndrome (LFS) is a rare autosomal dominant hereditary cancer syndrome caused by mutations in the TP53 gene that predisposes individuals to a wide variety of cancers, including breast cancer, soft tissue sarcomas, osteosarcomas, brain tumors, and adrenocortical carcinomas. Individuals found to carry germline mutations in TP53 have a 90% lifetime cancer risk, with a 20% chance of developing cancer before the age of 20. Despite the significant risk of childhood cancer, predictive testing for unaffected minors at risk for LFS has historically not been recommended, largely because of the lack of available and effective screening for the cancers involved. A recently developed screening protocol suggests an advantage to identifying and screening children at risk for LFS, and we therefore hypothesized that this protocol, along with the availability of new screening modalities, may substantiate a shift in recommendations for predictive genetic testing in minors at risk for LFS. We aimed to describe the current screening recommendations that genetic counselors provide to this population and to explore factors that may have influenced genetic counselors' attitudes and practice on this issue. An online survey was emailed to members of the National Society of Genetic Counselors (NSGC) and the Canadian Association of Genetic Counsellors (CAGC). Of an estimated 1,000 eligible participants, 172 completed surveys that were analyzed. Genetic counselors in this study were more likely to support predictive genetic testing for this population as the minor aged (p

Relevance: 20.00%

Abstract:

Existing data were used for this thesis, collected from first-year students enrolled at a major health science community college in the south central United States during the Fall 2010, Spring 2011, Fall 2011, and Spring 2012 semesters as part of the "Online Navigational Assessment Vehicle, Intervention Guidance, and Targeting of Risks (NAVIGATOR) for Undergraduate Minority Student Success" study (CPHS approval number HSC-GEN-07-0158). The Personal Background and Preparation Survey (PBPS) and a two-question risk self-assessment subscale were administered to students during their first-year orientation. The PBPS total risk score, the risk self-assessment total and overall scores, and underrepresented minority student (URMS) status were recorded. The purpose of this study is to evaluate and report the predictive validity of these indicators for Adverse Academic Status Events (AASE) and Nonadvancement Adverse Academic Status Events (NAASE), as well as the effectiveness of interventions targeted using the PBPS, among a diverse population of health science community college students. The predictive validity of the PBPS for AASE has previously been demonstrated among health science professions and graduate students (Johnson, Johnson, Kim, & McKee, 2009a; Johnson, Johnson, McKee, & Kim, 2009b). Data were analyzed using binary logistic regression and correlation in the SPSS 19 statistical package. Independent variables included baseline- versus intervention-year treatment, the PBPS, the risk self-assessment, and URMS status. The dependent variables were binary AASE and NAASE status.

The PBPS was the first reliable diagnostic and prescriptive instrument to establish documented predictive validity for student Adverse Academic Status Events among students attending health science professional schools. These results extend the documented validity of the PBPS in predicting AASE to a health science community college population. Results further demonstrated that interventions introduced using the PBPS were followed by approximately a one-third reduction in the odds of NAASE, controlling for URMS status and risk self-assessment scores. These results indicate that interventions introduced using the PBPS may have the potential to reduce AASE and attrition among URMS and non-URMS students attending health science community colleges on a broader scale, positively impacting the costs, shortages, and diversity of health science professionals.
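The reported one-third reduction in the odds corresponds to an odds ratio of roughly 0.67 on a baseline- versus intervention-year indicator in a binary logistic regression. A schematic sketch on synthetic data follows (the variable names are ours, and the thesis used SPSS rather than Python).

```python
# Logistic regression of NAASE status on an intervention-year indicator,
# controlling for URMS status and risk self-assessment. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1200
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),  # 0 = baseline year, 1 = intervention year
    "urms": rng.integers(0, 2, n),
    "risk_self": rng.normal(0, 1, n),
})
logit_p = -1.0 - 0.4 * df["intervention"] + 0.3 * df["urms"] + 0.5 * df["risk_self"]
df["naase"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("naase ~ intervention + urms + risk_self", data=df).fit(disp=False)
# OR near exp(-0.4) ~ 0.67, i.e., roughly one-third lower odds in intervention years.
print(np.exp(fit.params["intervention"]))
```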

Relevance: 20.00%

Abstract:

Objective: Describe and understand regional differences in, and associated multilevel factors (patient, provider, and regional) for, inappropriate utilization of advanced imaging tests in the privately insured population of Texas. Methods: We analyzed the Blue Cross Blue Shield of Texas claims dataset to study advanced imaging utilization during 2008-2010 in the PPO/PPO+ plans. We used three of the CMS "Hospital Outpatient Quality Reporting" imaging efficiency measures: MRI for low back pain without prior conservative management (OP-8), and utilization of combined with- and without-contrast abdominal CT (OP-10) and thorax CT (OP-11). Means and variation by hospital referral region (HRR) in Texas were measured, and a multilevel logistic regression for being a provider with high values on any of the three OP measures was used in the analysis. We also analyzed OP-8 at the individual level, using a multilevel logistic regression to identify predictive factors for having an inappropriate MRI for low back pain. Results: Mean OP-8 for Texas providers was 37.89%, mean OP-10 was 29.94%, and mean OP-11 was 9.24%. Variation was higher for the CT measures, and certain HRRs were consistently above the mean. Hospital providers had higher odds of high OP-8 values (OR, 1.34; CI, 1.12-1.60) but lower odds of high OP-10 and OP-11 values (OP-10: OR, 0.15; CI, 0.12-0.18; OP-11: OR, 0.43; CI, 0.34-0.53). Providers with the highest volume of imaging studies were less likely to have high OP-8 measures (OR, 0.58; CI, 0.48-0.70) but more likely to perform combined thoracic CT scans (OP-11: OR, 1.62; CI, 1.34-1.95). Males had higher odds of an inappropriate MRI (OR, 1.21; CI, 1.16-1.26), and the pattern of care in the six months prior to the MRI was significantly associated with having an inappropriate MRI. Conclusion: We identified significant variation in advanced imaging utilization across Texas. Type of facility was associated with measure performance, but the associations differed by type of study. Finally, certain individual characteristics, such as gender, age, and pattern of care, were predictors of inappropriate MRIs.
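For orientation, a measure such as OP-8 is essentially a per-provider proportion computed over claims. A minimal sketch with hypothetical column names follows (the actual CMS specification includes exclusion rules that this ignores).

```python
# Per-provider OP-8-style measure: share of low-back-pain MRIs performed
# without prior conservative management. Column names are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "provider_id":        ["A", "A", "A", "B", "B"],
    "prior_conservative": [True, False, False, True, True],  # therapy/meds before MRI
})
op8 = (~claims["prior_conservative"]).groupby(claims["provider_id"]).mean()
print(op8)                    # A: ~0.667, B: 0.0
print(op8[op8 > op8.mean()])  # flag providers above the statewide mean
```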

Relevance: 20.00%

Abstract:

The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Researchers who conduct HLGM analyses are mostly interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction in HLGM shows only whether there is a significant group difference in the average rate of change, rate of acceleration, or a higher polynomial effect; it conveys no information about the magnitude of the difference between the group trajectories at a specific time point. Reporting and interpreting effect sizes has therefore received increasing emphasis in HLGM in recent years, given the limitations of, and growing criticism of, statistical hypothesis testing. Yet most researchers fail to report model-implied effect sizes for comparing group trajectories, and their corresponding confidence intervals, in HLGM analyses, because appropriate standard functions for estimating such effect sizes are lacking, as are routines in popular statistical software to calculate them automatically.

The present project is the first to establish appropriate computing functions for assessing the standardized difference between group trajectories in HLGM. We proposed two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggested robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compared three methods of constructing confidence intervals around d and du, and recommended the best one for application. Finally, we constructed 95% confidence intervals, with the most suitable method, for the effect sizes obtained from the three simulated datasets.

The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between them can still be large at some time points. Effect sizes between group trajectories in an HLGM analysis therefore provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, comparing the three methods for constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameter, we suggest the noncentral-t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated (BCa) method when they do not.
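As a simplified instance of the recommended interval, the sketch below computes a standardized difference between two groups at a fixed time point with a BCa bootstrap confidence interval. It is our construction on synthetic values, not the project's functions, which operate on model-implied HLGM trajectories.

```python
# Standardized group difference at one time point with a BCa bootstrap CI.
import numpy as np
from scipy.stats import bootstrap

def cohens_d(g1, g2):
    """Standardized mean difference (pooled SD) at a fixed time point."""
    n1, n2 = len(g1), len(g2)
    pooled = np.sqrt(((n1 - 1) * np.var(g1, ddof=1) + (n2 - 1) * np.var(g2, ddof=1))
                     / (n1 + n2 - 2))
    return (np.mean(g1) - np.mean(g2)) / pooled

rng = np.random.default_rng(3)
treat = rng.normal(1.0, 1.0, 80)    # hypothetical outcomes at time t, treatment group
control = rng.normal(0.6, 1.0, 80)  # hypothetical outcomes at time t, control group

print("d =", cohens_d(treat, control))
# Note: method="BCa" with two-sample statistics requires a recent SciPy (>= 1.11).
res = bootstrap((treat, control), cohens_d, method="BCa", n_resamples=2000)
print("95% BCa CI:", res.confidence_interval)
```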

Relevance: 20.00%

Abstract:

Radiolarian cherts of Jurassic age in the Tethyan realm were recently interpreted as the product of high biosiliceous productivity along upwelling zones at subequatorial paleolatitudes, locations confirmed by revised paleomagnetic estimates. However, the widespread occurrence of cherts in the Eocene suggests that cherts may not always be reliable proxies of latitude and upwelling zones. In a new survey of the global spatio-temporal distribution of Cenozoic cherts in Deep Sea Drilling Project (DSDP) and Ocean Drilling Program (ODP) sediment cores, we found that cherts occur most frequently in the Paleocene and early Eocene, with a peak in occurrences at ~50 Ma coincident with the highest bottom-water temperatures of the early Eocene climatic optimum (EECO), when the global ocean was presumably characterized by reduced upwelling efficiency and biosiliceous productivity. Cherts occur less commonly during the subsequent Eocene global cooling trend. Primary paleoclimatic factors, rather than secondary diagenetic processes, therefore seem to control chert formation. This timing of peak Eocene chert occurrence, which is supported by detailed stratigraphic correlations, contradicts currently accepted models. Those models invoke an initial loading of large amounts of dissolved silica from enhanced weathering and/or volcanism into a supposedly sluggish EECO ocean, followed, during the subsequent middle Eocene global cooling, by more vigorous oceanic circulation and consequent upwelling that made this silica reservoir available for enhanced biosilicification, with chert forming as biosilica transformed during diagenesis. Instead, we suggest that basin-basin fractionation by deep-sea circulation could have raised the concentration of EECO dissolved silica, especially in the North Atlantic, where an alternative mode of silica burial, involving widespread direct precipitation and/or absorption of silica by clay minerals, could have operated to maintain the balance between silica input and output under the upwelling-deficient conditions of the EECO. Cherts may therefore not always be proxies of biosiliceous productivity associated with latitudinally focused upwelling zones.

Relevance: 20.00%

Abstract:

The development of the ecosystem approach and of models for the management of ocean marine resources requires easy access to standard, validated datasets of historical catch data for the main exploited species. Such datasets are used to measure the impact of biomass removal by fisheries and to evaluate model skill, and the use of a standard dataset facilitates model inter-comparison. North Atlantic albacore tuna is exploited all year round by longline fisheries and in summer and autumn by surface fisheries, and fishery statistics are compiled by the International Commission for the Conservation of Atlantic Tunas (ICCAT). Catch and effort data with geographical coordinates, at monthly resolution on 1° or 5° squares, were extracted for this species, with a careful definition of fisheries and data screening. In total, thirteen fisheries were defined for the period 1956-2010, covering the gears longline, troll, mid-water trawl, and bait fishing. However, the spatialized catch-effort data available in the ICCAT database represent only a fraction of the total catch. Length frequencies of the catch were also extracted, according to the fishery definitions above, for the period 1956-2010, with a quarterly temporal resolution and spatial resolutions varying from 1° × 1° to 10° × 20°. The resolution used to measure the fish also varies, with size bins of 1, 2, or 5 cm (fork length). Data screening detected inconsistencies: a relatively large number of samples exceeded 150 cm, whereas all studies of albacore growth suggest that the fish rarely grow beyond 130 cm. A threshold of 130 cm was therefore fixed, somewhat arbitrarily, and all length-frequency data above this value were removed from the original dataset.
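The screening rule described above amounts to a simple filter on fork length. A minimal sketch with hypothetical column names follows (the real extraction works on ICCAT Task II length-frequency tables).

```python
# Drop length-frequency records above the 130 cm fork-length threshold.
import pandas as pd

MAX_FORK_LENGTH_CM = 130  # albacore rarely exceed 130 cm per growth studies

lf = pd.DataFrame({
    "year":    [1990, 1990, 1991, 1991],
    "fishery": ["LL", "LL", "TR", "BB"],
    "fl_cm":   [95, 152, 118, 131],   # fork-length bin midpoints (hypothetical)
    "n_fish":  [40, 3, 25, 2],
})
screened = lf[lf["fl_cm"] <= MAX_FORK_LENGTH_CM]
print(screened)  # the 152 cm and 131 cm records are removed
```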