916 results for Compactification and String Models


Relevance:

100.00%

Publisher:

Abstract:

Therapists having a positive influence on patients is a common view of psychotherapy. There is, however, also an influence of patients on therapists, which has received less attention. The more restricted and rigid patients are, the more limited the range of interpersonal behavior in others with which they can get along. In line with social psychological, interpersonal, and clinical models, they try to maneuver the therapist into an interpersonal position that suits them well. With 60 patients, common strategies were rated: Good mood, Positive feedback, Negative feedback, Agenda setting, Provoking a response from the therapist, Negative reports about third persons, Fait accompli, Supplication, Self-promotion, Avoidance of contents, and Emotional avoidance. The rating procedure, frequencies, and therapist reactions to these patient strategies will be reported.

As a complement to experimental and theoretical approaches, numerical modeling has become an important component of studying asteroid collisions and impact processes. In the last decade, there have been significant advances in both computational resources and numerical methods. We discuss the present state-of-the-art numerical methods and material models used in "shock physics codes" to simulate impacts and collisions and give some examples of such codes. Finally, recent modeling studies are presented, focusing on the effects of various material properties and target structures on the outcome of a collision.

In applied work, economists often seek to relate a given response variable y to some causal parameter mu* associated with it. This parameter usually represents a summarization, based on some explanatory variables, of the distribution of y, such as a regression function, and treating it as a conditional expectation is central to its identification and estimation. However, the interpretation of mu* as a conditional expectation breaks down if some or all of the explanatory variables are endogenous. This is not a problem when mu* is modelled as a parametric function of explanatory variables, because it is well known how instrumental variables techniques can be used to identify and estimate mu*. In contrast, handling endogenous regressors in nonparametric models, where mu* is regarded as fully unknown, presents difficult theoretical and practical challenges. In this paper we consider an endogenous nonparametric model based on a conditional moment restriction. We investigate identification-related properties of this model when the unknown function mu* belongs to a linear space. We also investigate underidentification of mu* along with the identification of its linear functionals. Several examples are provided in order to develop intuition about identification and estimation for endogenous nonparametric regression and related models.
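
A conditional moment restriction of the form E[y − mu*(x) | z] = 0 can be illustrated with a sieve (series) two-stage least squares estimator, one standard device in the nonparametric instrumental variables literature. The sketch below is not the paper's estimator: the data-generating process, basis choices, and degrees are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating process with an endogenous regressor:
# z is the instrument, u is the common error that makes x endogenous.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.6 * u + 0.2 * rng.normal(size=n)
y = np.sin(x) + u + 0.1 * rng.normal(size=n)   # true mu*(x) = sin(x)

def poly_basis(v, k):
    # Simple polynomial sieve: [1, v, v^2, ..., v^k].
    return np.vander(v, k + 1, increasing=True)

P = poly_basis(x, 3)   # sieve approximating mu*
Q = poly_basis(z, 5)   # instrument space, richer than P

# Sieve 2SLS: project the basis of x onto the instrument space,
# then regress y on the projected basis.
Pi, *_ = np.linalg.lstsq(Q, P, rcond=None)
beta, *_ = np.linalg.lstsq(Q @ Pi, y, rcond=None)

# Naive OLS of y on P absorbs E[u | x] and is therefore biased.
beta_ols, *_ = np.linalg.lstsq(P, y, rcond=None)

grid = np.linspace(-1.5, 1.5, 7)
err_iv = np.max(np.abs(poly_basis(grid, 3) @ beta - np.sin(grid)))
err_ols = np.max(np.abs(poly_basis(grid, 3) @ beta_ols - np.sin(grid)))
print(err_iv < err_ols)
```

Because the instruments are functions of z alone, the projected basis is orthogonal to u, which is what restores consistency where ordinary regression fails.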

Coronary heart disease remains the leading cause of death in the United States, and elevated blood cholesterol has been found to be a major risk factor with roots in childhood. Tracking of cholesterol, i.e., the tendency to maintain a particular cholesterol level relative to the rest of the population, and variability in blood lipid levels with increasing age have implications for cholesterol screening and for assessing lipid levels in children, with the aim of preventing further rises and, ultimately, adult heart disease. In this study, the pattern of change in plasma lipids over time and their tracking were investigated. Also investigated were within-person variance and retest reliability, defined as the square root of within-person variance, for plasma total cholesterol, HDL-cholesterol, LDL-cholesterol, and triglycerides, and their relation to age, sex, and body mass index among participants from age 8 to 18 years.
In Project HeartBeat!, 678 healthy children aged 8, 11, and 14 years at baseline were enrolled and examined at 4-month intervals for up to 4 years. We examined the relationship between repeated observations with Pearson's correlations. Age- and sex-specific quintiles were calculated, and the probability that participants would remain in the uppermost quintile of their respective distribution was evaluated with life table methods. Plasma total cholesterol, HDL-C, and LDL-C at baseline were strongly and significantly correlated with measurements at subsequent visits across the sex and age groups. Plasma triglyceride at baseline was also significantly correlated with subsequent measurements, though less strongly than the other plasma lipids. The probability of remaining in the uppermost quintile was also high (60 to 70%) for plasma total cholesterol, HDL-C, and LDL-C.
We used a mixed longitudinal, or synthetic cohort, design with continuous observations from age 8 to 18 years to estimate within-person variance of plasma total cholesterol, HDL-C, LDL-C, and triglycerides. A total of 5809 measurements were available for both cholesterol and triglycerides. A multilevel linear model was used, and within-person variance among repeated measures over up to four years of follow-up was estimated for total cholesterol, HDL-C, LDL-C, and triglycerides separately. The relationship of within-person and inter-individual variance with age, sex, and body mass index was evaluated. Likelihood ratio tests were conducted by calculating the difference in −2 log(likelihood) between the basic model and alternative models. The square root of within-person variance provided the retest reliability (within-person standard deviation) for each plasma lipid. We found a retest reliability of 13.6 percent for plasma total cholesterol, 6.1 percent for HDL-cholesterol, 11.9 percent for LDL-cholesterol, and 32.4 percent for triglycerides. Retest reliability of plasma lipids was significantly related to age and body mass index, increasing with both. These findings have implications for screening guidelines, as participants in the uppermost quintile tended to maintain their status in each of the age groups during a four-year follow-up. The magnitude of within-person variability of plasma lipids influences the ability to classify children into the risk categories recommended by the National Cholesterol Education Program.
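
The retest reliability used here, the square root of the within-person variance, can be illustrated with a toy variance-components calculation. This numpy sketch uses hypothetical numbers (200 children, 8 visits, within-person SD of 15 mg/dL) and a one-way ANOVA estimator rather than the study's multilevel linear model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical repeated cholesterol measurements: 200 children, 8 visits;
# between-person SD 30 mg/dL, within-person SD 15 mg/dL.
n_subj, n_rep = 200, 8
person_mean = 170 + 30 * rng.normal(size=n_subj)
data = person_mean[:, None] + 15 * rng.normal(size=(n_subj, n_rep))

# One-way random-effects ANOVA estimators of the variance components.
subj_means = data.mean(axis=1)
grand_mean = data.mean()
ms_within = ((data - subj_means[:, None]) ** 2).sum() / (n_subj * (n_rep - 1))
ms_between = n_rep * ((subj_means - grand_mean) ** 2).sum() / (n_subj - 1)
var_within = ms_within
var_between = (ms_between - ms_within) / n_rep

# "Retest reliability" in the abstract's sense: the within-person SD.
within_sd = np.sqrt(var_within)
print(round(within_sd, 1))
```

With these simulated inputs the estimator recovers a within-person SD close to the 15 mg/dL used to generate the data.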

Maternal ingestion of high concentrations of radon-222 (Rn-222) in drinking water during pregnancy may pose a significant radiation hazard to the developing embryo. The effects of ionizing radiation on the embryo and fetus have been the subject of research, analyses, and the development of a number of radiation dosimetric models for a variety of radionuclides. Currently, essentially all of the biokinetic and dosimetric models developed by national and international radiation protection agencies and organizations recommend calculating the dose to the mother's uterus as a surrogate for the dose to the embryo. Heretofore, traditional radiation dosimetry models have not considered the embryo as a distinct and rapidly developing entity, the fact that it is implanted in the endometrial layer of the uterus, or the physiological interchanges that take place between maternal and embryonic cells following implantation of the blastocyst in the endometrium. The purpose of this research was to propose a new approach and mathematical model for calculating the absorbed radiation dose to the embryo, using a semiclassical treatment of alpha particle decay and the subsequent energy deposition in uterine and embryonic tissue. The new approach and model were compared and contrasted with the currently recommended biokinetic and dosimetric models for estimating the radiation dose to the embryo. The results obtained in this research demonstrate that the estimated absorbed dose for an embryo implanted in the endometrial layer of the uterus during the fifth week of embryonic development is greater than the estimated absorbed dose for an embryo implanted in the uterine muscle on the last day of the eighth week of gestation.
This research provides compelling evidence that the methodologies and dosimetric models recommended by the Nuclear Regulatory Commission and the International Commission on Radiological Protection for calculating the radiation dose to the embryo from maternal intakes of radionuclides, including maternal ingestion of Rn-222 in drinking water, would result in an underestimation of dose.
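
The basic quantity behind any such model is the absorbed dose, D = E/m. The sketch below shows only this bookkeeping for Rn-222 alpha decays (alpha energy ≈ 5.49 MeV); the decay count and tissue mass are hypothetical illustration values, and the dissertation's semiclassical energy-deposition treatment is not reproduced here.

```python
# Order-of-magnitude absorbed-dose bookkeeping, D = E / m, assuming all
# alpha energy is deposited locally (alpha range in tissue is tens of
# micrometers, so this is a reasonable first approximation).
MEV_TO_J = 1.602e-13        # joules per MeV
E_ALPHA_MEV = 5.49          # alpha energy of the Rn-222 decay

n_decays = 1.0e6            # hypothetical number of decays in the tissue
mass_kg = 1.0e-4            # hypothetical embryonic tissue mass (0.1 g)

energy_j = n_decays * E_ALPHA_MEV * MEV_TO_J
dose_gy = energy_j / mass_kg      # absorbed dose in gray (J/kg)
print(f"{dose_gy:.2e} Gy")
```

Replacing the uterine mass with the much smaller embryonic mass in the denominator is one intuition for why a uterus-as-surrogate calculation can understate the embryonic dose.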

With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods that provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyze pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the BUGS ('Bayesian inference Using Gibbs Sampling') implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies, and we applied them to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent between the Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix.
The Bayesian multinomial model consistently underestimated sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be applied directly to sparse data without ad hoc correction.
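
The data structure assumed by such bivariate models can be sketched by simulating study-level sensitivity/specificity pairs with correlated random effects on the logit scale. All numbers below (hyperparameters, study sizes, the negative correlation) are hypothetical illustrations, not results from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def logit(p):
    return np.log(p / (1 - p))

def expit(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical hyperparameters: mean sensitivity 0.85, mean specificity
# 0.90, between-study SDs on the logit scale, and a negative correlation
# (a common pattern arising from threshold effects across studies).
mu = np.array([logit(0.85), logit(0.90)])
sd = np.array([0.5, 0.6])
rho = -0.4
cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2]])

n_studies = 500
theta = rng.multivariate_normal(mu, cov, size=n_studies)
se_true, sp_true = expit(theta[:, 0]), expit(theta[:, 1])

# Observed counts, binomial within each study: true positives among the
# diseased, true negatives among the non-diseased.
n_dis, n_nondis = 50, 50
tp = rng.binomial(n_dis, se_true)
tn = rng.binomial(n_nondis, sp_true)
print(se_true.mean().round(2), sp_true.mean().round(2))
```

A bivariate model fitted to (tp, tn) works backward from such counts to the logit-scale means, variances, and correlation; modeling the counts as binomial directly is what lets the bivariate binomial variant handle sparse data without continuity corrections.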

Many public health agencies and researchers are interested in comparing hospital outcomes, for example morbidity, mortality, and hospitalization, across areas and hospitals. However, because rates vary among hospitals for reasons that include several biases, we are interested in controlling for such bias and assessing real differences in clinical practice. In this study, we compared between-hospital variation in rates of severe intraventricular haemorrhage (IVH) in infants using a Frequentist statistical approach versus Bayesian hierarchical models through a simulation study. The template data set for the simulation study comprised the numbers of infants with severe IVH in 24 intensive care units in the Australian and New Zealand Neonatal Network from 1995 to 1997, reflecting severe IVH rates in preterm babies. We evaluated the rates of severe IVH for the 24 hospitals with two hierarchical models in the Bayesian approach, comparing their performance with shrunken rates from the Frequentist method. Bayesian Gamma-Poisson (BGP) and Beta-Binomial (BBB) hierarchical models were used for the Bayesian approach, and the shrunken estimator of the Gamma-Poisson (FGP) hierarchical model, fitted by maximum likelihood, was calculated as the Frequentist approach. To simulate data, the total number of infants in each hospital was held fixed, and the simulated data were analyzed with both the Bayesian and Frequentist models under two true parameters for the severe IVH rate: the observed rate, and the expected severe IVH rate after adjusting for five predictor variables in the template data. The bias in the estimated rates of severe IVH showed that the Bayesian models gave less variable estimates than the Frequentist model. We also discussed and compared the results from the three models to examine the variation in rates of severe IVH in terms of 20th-centile rates and the avoidable number of severe IVH cases.
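
The Gamma-Poisson shrinkage idea can be sketched with a simple empirical Bayes calculation: under a conjugate Gamma prior, each hospital's shrunken rate is the posterior mean (α + y_i)/(β + n_i). The data and the method-of-moments prior fit below are hypothetical illustrations, not the maximum likelihood procedure used in the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: severe-IVH counts y among n preterm infants in 24
# hospitals, with true rates drawn from a Gamma distribution.
n_hosp = 24
n = rng.integers(50, 400, size=n_hosp)      # infants per hospital
alpha_true, beta_true = 4.0, 40.0           # mean rate 0.10
lam = rng.gamma(alpha_true, 1 / beta_true, size=n_hosp)
y = rng.poisson(lam * n)

# Method-of-moments fit of the Gamma prior from the crude rates, then
# the conjugate posterior-mean ("shrunken") rate for each hospital:
#     E[lam_i | y_i] = (alpha + y_i) / (beta + n_i)
crude = y / n
m, v = crude.mean(), crude.var(ddof=1)
extra = max(v - (crude / n).mean(), 1e-8)   # excess over Poisson noise
beta_hat = m / extra
alpha_hat = m * beta_hat
shrunk = (alpha_hat + y) / (beta_hat + n)
print(crude.var() > shrunk.var())
```

The shrunken rates are pulled toward the overall mean, most strongly for small hospitals, which is exactly why hierarchical estimates are less variable than crude rates in comparisons like the one above.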

Objectives. This paper seeks to assess the effect of regression model misspecification on statistical power in a variety of situations.
Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical, and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms is derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared with the linear and categorical models, the fractional polynomial models, with their higher correlations, provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that misspecification in regression can produce marked decreases in power with small sample sizes. However, the categorical model had the greatest power, ranging from 0.877 to 0.936 depending on sample size and outcome variable. The power of the fractional polynomial model was close to that of the linear model, ranging from 0.69 to 0.83, and appeared to be affected by this model's increased degrees of freedom.
Conclusion. Correlations between alternative model specifications can be used to provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known, simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations in which the correct model specification is unknown or complex. Simulation of power for misspecified models confirmed the results based on correlation methods and also illustrated the effect of model degrees of freedom on power.
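
The core idea, approximating the effect of misspecification by the correlation between the correct and misspecified specifications, can be sketched in a few lines. The quadratic truth and linear misspecification below are hypothetical examples, not the NHANES models used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical example: the true relationship is quadratic, while the
# misspecified model keeps only the linear term.
n = 10_000
x = rng.uniform(-1, 1, size=n)
f_true = x + 0.8 * x**2        # correct specification of the mean of y
f_mis = x                      # linear misspecification

# The approximation works with the correlation between the two
# specifications; power under misspecification behaves roughly like
# power with the effect size attenuated by this correlation.
r = np.corrcoef(f_true, f_mis)[0, 1]
print(round(r, 3))
```

A correlation near 1 indicates little power loss from the misspecification; the further it falls below 1, the larger the expected attenuation.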

Additive and multiplicative models of relative risk were used to measure the effect of cancer misclassification and DS86 random errors on lifetime risk projections in the Life Span Study (LSS) of Hiroshima and Nagasaki atomic bomb survivors. The true number of cancer deaths in each stratum of the cancer mortality cross-classification was estimated using sufficient statistics from the EM algorithm. Average survivor doses in the strata were corrected for DS86 random error (σ = 0.45) by use of reduction factors. Poisson regression was used to model the corrected and uncorrected mortality rates with covariates for age at time of bombing, age at time of death, and gender. Excess risks were in good agreement with risks in RERF Report 11 (Part 2) and the BEIR-V report. Bias due to DS86 random error typically ranged from −15% to −30% for both sexes and all sites and models. The total bias, including diagnostic misclassification, of excess risk of nonleukemia for exposure to 1 Sv from age 18 to 65 under the non-constant relative projection model was −37.1% for males and −23.3% for females. Total excess risks of leukemia under the relative projection model were biased −27.1% for males and −43.4% for females. Thus, nonleukemia risks for 1 Sv from ages 18 to 85 (DRREF = 2) increased from 1.91%/Sv to 2.68%/Sv among males and from 3.23%/Sv to 4.02%/Sv among females. Leukemia excess risks increased from 0.87%/Sv to 1.10%/Sv among males and from 0.73%/Sv to 1.04%/Sv among females. Bias depended on the gender, site, correction method, exposure profile, and projection model considered. Future studies that use LSS data for U.S. nuclear workers may be downwardly biased if lifetime risk projections are not adjusted for random and systematic errors. (Supported by U.S. NRC Grant NRC-04-091-02.)
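
The dose-error correction can be illustrated with a stylized regression-calibration sketch: classical multiplicative lognormal dose error attenuates a naive risk slope, and replacing each observed dose with E[true | observed] (a reduction factor applied to the dose) removes the attenuation. The dose distribution, risk slope, and linear risk model below are hypothetical stand-ins, not the LSS analysis itself:

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical multiplicative dose error: observed = true * exp(eps),
# with eps ~ N(0, sigma^2) and sigma = 0.45 as in the DS86 setting.
sigma = 0.45
n = 100_000
true_dose = rng.lognormal(mean=-1.0, sigma=0.8, size=n)
obs_dose = true_dose * rng.lognormal(mean=0.0, sigma=sigma, size=n)

beta = 0.5                     # hypothetical true risk slope per unit dose
y = beta * true_dose + rng.normal(scale=0.5, size=n)

# Naive slope on the error-prone dose is attenuated toward zero.
naive = np.cov(obs_dose, y)[0, 1] / obs_dose.var()

# Regression calibration for jointly lognormal (true, observed) doses:
# shrink log-dose toward its mean and add half the conditional variance.
lt = np.log(obs_dose)
s_t2 = lt.var() - sigma**2                 # log-scale variance of true dose
lam = s_t2 / lt.var()                      # calibration slope on log scale
cond_var = s_t2 * sigma**2 / lt.var()
e_true = np.exp(lt.mean() + lam * (lt - lt.mean()) + cond_var / 2)

corrected = np.cov(e_true, y)[0, 1] / e_true.var()
print(round(naive, 2), round(corrected, 2))
```

The corrected slope recovers the generating value, which mirrors why risk estimates rise once doses are adjusted for random error, as in the nonleukemia and leukemia figures above.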

Objective: In this secondary data analysis, three statistical methodologies were implemented to handle cases with missing data in a motivational interviewing and feedback study. The aim was to evaluate the impact these methodologies have on the data analysis.
Methods: We first evaluated whether the assumption of missing completely at random held for this study. We then conducted a secondary data analysis using a mixed linear model, handling missing data with three methodologies: (a) complete case analysis; (b) multiple imputation with an explicit model containing the outcome variables, time, and the interaction of time and treatment; and (c) multiple imputation with an explicit model containing the outcome variables, time, the interaction of time and treatment, and additional covariates (e.g., age, gender, smoking, years in school, marital status, housing, race/ethnicity, and whether participants played on an athletic team). Several comparisons were conducted, including: 1) the motivational interviewing with feedback group (MIF) vs. the assessment-only group (AO), the motivational interviewing group (MIO) vs. AO, and the feedback-only group (FBO) vs. AO; 2) MIF vs. FBO; and 3) MIF vs. MIO.
Results: We first evaluated the patterns of missingness, which indicated that about 13% of participants showed monotone missing patterns and about 3.5% showed non-monotone missing patterns. We then evaluated the missing-completely-at-random assumption with Little's MCAR test; the chi-square test statistic was 167.8 with 125 degrees of freedom (p = 0.006), indicating that the data could not be assumed to be missing completely at random. After that, we compared whether the three strategies reached the same results. For the comparison between MIF and AO, as well as the comparison between MIF and FBO, only the multiple imputations with additional covariates, under uncongenial and congenial models, reached different results. For the comparison between MIF and MIO, all the methodologies for handling missing values produced different results.
Discussion: The study indicated, first, that missingness was crucial in this study. Second, understanding the model assumptions was important, since we could not determine whether the data were missing at random or missing not at random. Future research should therefore explore additional sensitivity analyses under the missing-not-at-random assumption.
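
The reported Little's MCAR test result can be checked directly from the chi-square statistic and its degrees of freedom:

```python
from scipy.stats import chi2

# Little's MCAR test as reported in the abstract: chi-square statistic
# 167.8 on 125 degrees of freedom. The p-value follows from the
# chi-square survival function.
p = chi2.sf(167.8, df=125)
print(round(p, 3))   # small p: reject "missing completely at random"
```

This reproduces the reported significance level and is a quick sanity check whenever a chi-square statistic and its degrees of freedom are given.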

Studies have suggested that acculturation is related to diabetes prevalence and risk factors among immigrant groups in the United States (U.S.); however, scant data are available to investigate this relationship among Asian Americans and Asian American subgroups. The objective of this cross-sectional study was to examine the association between length of stay in the U.S. and type 2 diabetes prevalence and its risk factors among Chinese Americans in Houston, Texas. Data were obtained from the 2004-2005 Asian-American Health Needs Assessment in Houston, Texas (N = 409 Chinese Americans) for secondary analysis. Diabetes prevalence and risk factors (overweight/obesity and access to medical care) were based on self-report. Descriptive statistics summarized demographic characteristics, diabetes prevalence, and reasons for not seeing a doctor. Logistic regression, using an incremental modeling approach, was used to measure the association between length of stay and diabetes prevalence and related risk factors, while adjusting for the potential confounding factors of age, gender, education level, and income level. Although the prevalence of type 2 diabetes was highest among those living in the U.S. for more than 20 years, there was no significant association between length of stay in the U.S. and diabetes prevalence among these Chinese Americans after adjustment for confounding factors. Likewise, no association was found between length of stay in the U.S. and overweight/obese status after adjusting for confounding factors. On the other hand, a longer length of stay was significantly associated with increased health insurance coverage in both unadjusted and adjusted models. The findings of this study suggest that length of stay in the U.S. alone may not be an indicator of diabetes risk among Chinese Americans.
Future research should consider alternative models of acculturation (e.g., models that treat acculturation as a multi-dimensional rather than uni-dimensional process), which may more accurately depict its effect on diabetes prevalence and related risk factors.

Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Owing to the rapid development of genotyping and sequencing technologies, we are now able to assess the causal effects of many genetic and environmental factors more accurately. Genome-wide association studies have localized many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of these diseases. More advanced statistical models are needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capability and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated advantages over some standard approaches in certain areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods may fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to related areas. It includes three parts: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending two Bayesian statistical methods developed for gene-environment interaction studies to related problems such as adaptive borrowing of historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis), and gene-environment interactions in the same model. It is well known that, in many practical situations, there is a natural hierarchical structure between the main effects and interactions in a linear model. We propose a model that incorporates this hierarchical structure into the Bayesian mixture model, so that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious, and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify, respectively, that both or at least one of the main effects of interacting factors must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with an 'independent' model that does not impose the hierarchical constraint and observe their superior performance in most of the situations considered. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of incorporating useful prior information into the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection in most cases.
Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions and by successfully identifying the reported associations. This is practically appealing for investigating causal factors among a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting true main effects and interactions compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power to detect non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) developed for gene-environment interaction studies. Inspired by these models, we develop two novel statistical methods that handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the methods for gene-environment interactions in that they balance statistical efficiency and bias in a unified model.
Through extensive simulation studies, we compare the operating characteristics of the proposed models with existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
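
The strong and weak hierarchy rules can be made concrete by enumerating which models over two main effects and their interaction each rule permits. This toy sketch only illustrates the constraint itself, not the Bayesian mixture machinery:

```python
from itertools import product

# Models over two main effects (G, E) and their interaction (GxE),
# each coded 0/1 for excluded/included. Hierarchy rules:
#   strong: GxE allowed only if BOTH G and E are in the model
#   weak:   GxE allowed if AT LEAST ONE of G, E is in the model
def valid(model, rule):
    g, e, gxe = model
    if not gxe:
        return True
    return (g and e) if rule == "strong" else (g or e)

models = list(product([0, 1], repeat=3))
strong = [m for m in models if valid(m, "strong")]
weak = [m for m in models if valid(m, "weak")]
print(len(models), len(strong), len(weak))
```

Of the 8 unconstrained models, the strong rule keeps 5 and the weak rule keeps 7; pruning the model space this way is what lets hierarchical variable selection discard implausible stand-alone interactions.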

Development of homology modeling methods will remain an area of active research. These methods aim to produce increasingly accurate three-dimensional structures of therapeutically relevant proteins that have not yet been crystallized, e.g., Class A G-protein coupled receptors (GPCRs). Incorporating protein flexibility is one way to achieve this goal. Here, I discuss the enhancement and validation of ligand-steered modeling, originally developed by Dr. Claudio Cavasotto, via cross-modeling of newly crystallized GPCR structures. This method uses known ligands and known experimental information to optimize the relevant protein binding sites by incorporating protein flexibility. The ligand-steered models reasonably reproduced the binding sites and co-crystallized native ligand poses of the β2 adrenergic and adenosine A2A receptors using a single template structure. They also outperformed the template structures and crude models in small-scale high-throughput docking experiments and compound selectivity studies. Next, the application of this method to develop high-quality homology models of cannabinoid receptor 2, an emerging non-psychotic pain management target, is discussed. These models were validated by their ability to rationalize structure-activity relationship data for two series of compounds, inverse agonists and agonists. The method was also applied to improve the virtual screening performance of the β2 adrenergic crystal structure by optimizing the binding site using β2-specific compounds. These results show the feasibility of optimizing only the pharmacologically relevant protein binding sites and the method's applicability to structure-based drug design projects.

Surface and deepwater paleoclimate records in Irminger Sea core SO82-5 (59°N, 31°W) and Icelandic Sea core PS2644 (68°N, 22°W) exhibit large fluctuations in thermohaline circulation (THC) from 60 to 18 calendar kyr B.P., with a dominant periodicity of 1460 years from 46 to 22 calendar kyr B.P., matching the Dansgaard-Oeschger (D-O) cycles in the Greenland Ice Sheet Project 2 (GISP2) temperature record [Grootes and Stuiver, 1997, doi:10.1029/97JC00880]. During interstadials, summer sea surface temperatures (SSTsu) in the Irminger Sea averaged 8°C, and sea surface salinities (SSS) averaged ~36.5, recording a strong Irminger Current and Atlantic THC. During stadials, SSTsu dropped to 2°-4°C, in phase with SSS drops of ~1-2. These records reveal major meltwater injections via the East Greenland Current, which turned off North Atlantic deepwater convection and hence the heat advection to the north, in harmony with various ocean circulation and ice models. On the basis of the IRD composition, icebergs came from Iceland, east Greenland, and perhaps Svalbard and other northern ice sheets. However, the southward-drifting icebergs were initially jammed in the Denmark Strait, reaching the Irminger Sea only with a lag of 155-195 years. We also conclude that the abrupt stadial terminations, the D-O warming events, were tied to iceberg melt via abundant seasonal sea ice and brine water formation in the meltwater-covered northwestern North Atlantic. In the 1/1460-year frequency band, benthic δ18O brine-water spikes led the temperature maxima above Greenland and in the Irminger Sea by as little as 95 years. Thus abundant brine formation, induced by seasonal freezing of large parts of the northwestern Atlantic, may have finally entrained a current of warm surface water from the subtropics and thereby triggered the sudden reactivation of the THC. In summary, the internal dynamics of the east Greenland ice sheet may have formed the ultimate pacemaker of the D-O cycles.
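
The 1460-year periodicity reported here is the kind of signal a periodogram recovers from an evenly sampled record. The sketch below uses a synthetic noisy series with a hypothetical sampling interval and noise level, not the core data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic proxy record: samples every 10 years over a 23,360-year
# window (16 full 1460-year cycles), with additive noise.
dt = 10.0
t = np.arange(0, 23_360, dt)
signal = np.sin(2 * np.pi * t / 1460) + 0.8 * rng.normal(size=t.size)

# Periodogram via the real FFT; the dominant non-zero frequency bin
# gives back the period of the embedded cycle.
freqs = np.fft.rfftfreq(t.size, d=dt)
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
peak_period = 1 / freqs[1:][np.argmax(power[1:])]
print(round(peak_period))
```

Real sediment records are unevenly sampled after age-model conversion, so published analyses typically use methods suited to uneven spacing; the FFT version above is just the simplest illustration of detecting a dominant cycle in a frequency band.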

The world is changing rapidly, and people today face numerous challenges in achieving a meaningful and fulfilling life. In many countries there are enormous systemic barriers to address, such as massive unemployment, HIV/AIDS, social disintegration, and inadequate infrastructure. One job for life is over; for many it never existed. Old metaphors and old models of career development no longer apply. New ways of thinking about careers are necessary, ones that take into account the context in which people are living, the reality of today's labour market, and the fact that people's career-life journey contains many branching paths, barriers, and obstacles, but also allies and sources of assistance. Flexibility is important, as is keeping options open and making sure the journey is meaningful. Guidance professionals need to begin early, working with other professionals and those seeking assistance to develop attitudes that help people take charge of their own career-life paths. People need a vision for their life that will drive a purposeful approach to career-life planning and prevent floundering. Helping people achieve that direction is most effectively accomplished when policy makers and practitioners work together to ensure that effective and accessible services are available for those who need them, and when a large part of the focus is on addressing the context in which marginalized people work and live.