911 results for GNSS, Ambiguity resolution, Regularization, Ill-posed problem, Success probability


Relevance:

30.00%

Publisher:

Abstract:

The sensitivity of the tropics to climate change, particularly the amplitude of glacial-to-interglacial changes in sea surface temperature (SST), is one of the great controversies in paleoclimatology. Here we reassess faunal estimates of ice age SSTs, focusing on the problem of no-analog planktonic foraminiferal assemblages in the equatorial oceans that confounds both classical transfer function and modern analog methods. A new calibration strategy developed here, which uses past variability of species to define robust faunal assemblages, solves the no-analog problem and reveals ice age cooling of 5° to 6°C in the equatorial current systems of the Atlantic and eastern Pacific Oceans. Classical transfer functions underestimated temperature changes in some areas of the tropical oceans because core-top assemblages misrepresented the ice age faunal assemblages. Our finding is consistent with some geochemical estimates and model predictions of greater ice age cooling in the tropics than was inferred by Climate: Long-Range Investigation, Mapping, and Prediction (CLIMAP) [1981] and thus may help to resolve a long-standing controversy. Our new foraminiferal transfer function suggests that such cooling was limited to the equatorial current systems, however, and supports CLIMAP's inference of stability of the subtropical gyre centers.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a lesson for introducing probability teaching using the disc game, which is based on the concept of geometric probability and consists of determining the probability that a randomly thrown disc does not intersect the lines of a gridded surface. The problem was posed to a group of 3rd-year students at the Federal Institute of Education, Science and Technology of Rio Grande do Norte - João Câmara. The students were asked to build a grid board for which the players' success percentage had been defined in advance. Once the grid board was built, the students checked whether that theoretically predetermined percentage corresponded to the reality obtained through experimentation. The results and the students' attitude in subsequent classes suggested greater engagement with the discipline, making the environment conducive to learning.
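
Since the abstract leaves the game's dimensions unspecified, here is a minimal Monte Carlo sketch of the disc game with illustrative numbers: for grid lines spaced d apart in both directions and a disc of radius r (with 2r < d), the disc misses every line exactly when its centre lands in the inner (d - 2r) × (d - 2r) square of a cell, giving P = ((d - 2r)/d)^2.

```python
import random

def miss_probability(d: float, r: float, trials: int = 1_000_000) -> float:
    """Estimate the probability that a disc of radius r, thrown at random
    onto a square grid of lines spaced d apart, touches no line."""
    hits = 0
    for _ in range(trials):
        # By symmetry, only the disc centre's position within one cell matters.
        x, y = random.uniform(0, d), random.uniform(0, d)
        if r <= x <= d - r and r <= y <= d - r:
            hits += 1
    return hits / trials

d, r = 10.0, 2.0
print(miss_probability(d, r))      # Monte Carlo estimate
print(((d - 2 * r) / d) ** 2)      # exact value: ((d - 2r)/d)^2 = 0.36
```

Inverting the formula gives the board dimensions needed to reach a predefined success percentage, which is presumably how the students' boards were designed.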

Relevance:

30.00%

Publisher:

Abstract:

Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single- and multi-image methods. This thesis focuses on developing algorithms based on mathematical theories for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: i.e., we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although the existing methods already perform well, they do not take into account the geometry of the data to: regularize the solution, cluster data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or learn dictionaries (they are often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure-tensor-based regularization term) in order to improve the sharpness of edges. SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The AGNN and GOC methods outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size. The aSOB strategy outperforms both PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR. Our proposed G2SR algorithm shows better visual and quantitative results when compared to state-of-the-art methods.
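
For orientation, the generic sparse-coding step that learning-based methods of this kind build on can be sketched as follows; this is a toy illustration with random coupled dictionaries, not the thesis's SE-ASDS/AGNN/GOC/G2SR pipeline:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)

# Toy coupled dictionaries: columns are paired low-/high-res patch atoms.
n_atoms, low_dim, high_dim = 64, 25, 100   # e.g. 5x5 and 10x10 patches
D_low = rng.standard_normal((low_dim, n_atoms))
D_low /= np.linalg.norm(D_low, axis=0)      # unit-norm atoms
D_high = rng.standard_normal((high_dim, n_atoms))

def sr_patch(y_low, n_nonzero=3):
    """Code the low-res patch sparsely over D_low, then synthesize the
    high-res patch from the same coefficients over D_high."""
    alpha = orthogonal_mp(D_low, y_low, n_nonzero_coefs=n_nonzero)
    return D_high @ alpha

y_low = rng.standard_normal(low_dim)        # stand-in for an observed patch
print(sr_patch(y_low).shape)                # (100,) high-res patch estimate
```

In a real pipeline the dictionaries are learned from training image pairs and the reconstructed patches are blended with reconstruction-based regularization terms, which is where the geometry-aware contributions of the thesis enter.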

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a summary of the key objectives, instrumentation and logistic details, goals, and initial scientific findings of the European Marie Curie Action SAPUSS project carried out in the western Mediterranean Basin (WMB) during September-October 2010. The key SAPUSS objective is to deduce aerosol source characteristics and to understand the atmospheric processes responsible for their generation and transformation, both horizontally and vertically, in the Mediterranean urban environment. To achieve this, the unique approach of SAPUSS is the concurrent measurement of aerosols with multiple techniques at six monitoring sites around the city of Barcelona (NE Spain): a main road traffic site, two urban background sites, a regional background site and two urban tower sites (150 m and 545 m above sea level; 150 m and 80 m above ground, respectively). SAPUSS allows us to considerably advance our knowledge of the atmospheric chemistry and physics of the urban Mediterranean environment, which is possible only because of both the three-dimensional spatial scale and the high sampling time resolution used. During SAPUSS different meteorological regimes were encountered, including warm Saharan, cold Atlantic, wet European and stagnant regional ones. The different meteorology of these regimes is described herein. Additionally, we report the trends of the parameters regulated for air quality purposes (both gaseous and aerosol mass concentrations), and we compare the six monitoring sites. High levels of traffic-related gaseous pollutants were measured at the urban ground-level monitoring sites, whereas layers of tropospheric ozone were recorded at tower levels. In particular, tower-level night-time average ozone concentrations (80 ± 25 µg m⁻³) were up to double those at ground level. The examination of the vertical profiles clearly shows the predominant influence of NOx on ozone concentrations, and a source of ozone aloft. Analysis of the particulate matter (PM) mass concentrations shows an enhancement of coarse particles (PM2.5-10) at the urban ground level (+64%, average 11.7 µg m⁻³) but of fine ones (PM1) at urban tower level (+28%, average 14.4 µg m⁻³). These results show complex dynamics of the size-resolved PM mass at both horizontal and vertical levels of the study area. Preliminary modelling findings reveal an underestimation of the fine accumulation aerosols. In summary, this paper lays the foundation of SAPUSS, an integrated study of relevance to many other similar urban Mediterranean coastal environment sites.

Relevance:

30.00%

Publisher:

Abstract:

The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI imaging implementation and the need for novel DCE-MRI data analysis methods that yield richer functional heterogeneity information.

This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for radiotherapy assessment, and is thus naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors and proposes improvements to it; the second part explores the potential value of image heterogeneity analysis and of combining multiple PK models for therapeutic response assessment, and develops several novel DCE-MRI data analysis methods.

I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm builds on the recently developed compressed sensing (CS) theory: by utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data were selected as reference data, and accelerated k-space acquisition was simulated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, from the undersampled data and from the fully-sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of PK maps generated from the undersampled data relative to the PK maps generated from the fully-sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully-sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method is therefore feasible and promising.
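
For orientation, a generic form of the TGV-regularized CS reconstruction described above can be written as follows; the abstract does not give the study's exact penalty weights or operators, so this is the standard second-order TGV formulation rather than the study's implementation:

```latex
\hat{x} \;=\; \arg\min_{x} \; \tfrac{1}{2}\,\big\| M \mathcal{F} x - y \big\|_2^2
\;+\; \lambda\, \mathrm{TGV}_\alpha^2(x),
\qquad
\mathrm{TGV}_\alpha^2(x) \;=\; \min_{v} \; \alpha_1 \big\| \nabla x - v \big\|_1
\;+\; \alpha_0 \big\| \mathcal{E}(v) \big\|_1,
```

where \mathcal{F} is the Fourier transform, M the binary undersampling mask (radial multi-ray or Cartesian random, as above), y the acquired k-space samples, and \mathcal{E} the symmetrized derivative of the auxiliary field v.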

Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based deformation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise in the data, and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolution (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolution, its calculation efficiency was superior to current methods by roughly two orders of magnitude (10^2). In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that this new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
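
As background, the standard Tofts model and the derivative trick that makes it linear in the parameters can be sketched as follows; this is the textbook formulation, not necessarily the exact deformation or KZ-filtering pipeline the thesis develops:

```latex
% Integral (convolution) form of the standard Tofts model:
C_t(t) \;=\; K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau .
% Differentiating removes the convolution:
\frac{dC_t}{dt} \;=\; K^{\mathrm{trans}}\, C_p(t) \;-\; k_{ep}\, C_t(t),
% and integrating back yields a form linear in (K^trans, k_ep),
% solvable by ordinary least squares at each voxel:
C_t(t) \;=\; K^{\mathrm{trans}} \int_0^t C_p\, d\tau \;-\; k_{ep} \int_0^t C_t\, d\tau .
```

Here C_t and C_p are the tissue and plasma contrast agent concentrations; at high temporal resolution the integrals are well approximated numerically, which is why such linear formulations pay off there.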

II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodology development along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity, inspired by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple-fraction treatments with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when the Rényi dimensions were used for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM performed better overall than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned before, selected parameters from the dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When using the dynamic FSD parameters, treatment/control group classification after the first treatment fraction was better than when using conventional PK statistics. These results suggest the promise of this novel method for capturing early therapeutic response.
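
The abstract does not define its Rényi-dimension or FSD computations, but the flavor of fractal-dimension analysis on a parameter map can be illustrated with a plain box-counting estimate on a thresholded map; a generic sketch, not the thesis's implementation:

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Estimate the box-counting fractal dimension of a 2-D binary map
    by regressing log N(s) against log(1/s) over box sizes s."""
    assert mask.ndim == 2
    n = 2 ** int(np.floor(np.log2(min(mask.shape))))
    mask = mask[:n, :n]                              # power-of-two crop
    sizes = 2 ** np.arange(int(np.log2(n)), 0, -1)   # n, n/2, ..., 2
    counts = []
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel.
        boxes = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
demo = rng.random((128, 128)) > 0.5        # stand-in for a thresholded PK map
print(box_counting_dimension(demo))        # ~2 for a dense random mask
```

A heterogeneity analysis would apply such an estimate (or a Rényi generalization) to the PK rate constant map at each time point and track how the dimension evolves under treatment.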

The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as the gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker was able to reflect significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.

In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.

Relevance:

30.00%

Publisher:

Abstract:

Testing for two-sample differences is challenging when the differences are local and involve only a small portion of the data. To solve this problem, we apply a multi-resolution scanning framework that performs dependent local tests on subsets of the sample space. We use a nested dyadic partition of the sample space to obtain a collection of windows and test for sample differences within each window. We place a joint prior on the states of the local hypotheses that allows both vertical and horizontal message passing along the partition tree to reflect the spatial dependency among windows. This information-passing framework is critical for detecting local sample differences. We use both the loopy belief propagation algorithm and MCMC to obtain the posterior null probability for each window. These probabilities are then used to report sample differences based on decision procedures. Simulation studies are conducted to illustrate the performance. Multiple testing adjustment and convergence of the algorithms are also discussed.
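
To make the window collection concrete, here is a one-dimensional sketch of the nested dyadic partition the method scans over; the joint prior and the message-passing inference are not reproduced here:

```python
def dyadic_windows(lo: float, hi: float, max_depth: int):
    """Enumerate the nested dyadic partition of [lo, hi): at depth k the
    interval is split into 2**k equal windows; every window at every
    depth is a candidate region for a local two-sample test."""
    windows = []
    for depth in range(max_depth + 1):
        step = (hi - lo) / 2 ** depth
        for i in range(2 ** depth):
            windows.append((depth, lo + i * step, lo + (i + 1) * step))
    return windows

for depth, a, b in dyadic_windows(0.0, 1.0, 2):
    print(depth, round(a, 3), round(b, 3))
# depth 0: [0,1); depth 1: [0,0.5), [0.5,1); depth 2: the four quarters
```

Each window is a node in the partition tree; "vertical" messages pass between a window and its two children, while "horizontal" messages pass between spatially adjacent windows at the same depth.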

Relevance:

30.00%

Publisher:

Abstract:

This thesis details the top-down fabrication of nanostructures on Si and Ge substrates by electron beam lithography (EBL). Various polymeric resist materials were used to create nanopatterns by EBL, and Chapter 1 discusses the development characteristics of these resists. Chapter 3 describes the processing parameters, resolution, and topographical and structural changes of a new EBL resist known as 'SML'. A comparison between SML and the standard resists PMMA and ZEP520A was undertaken to determine the suitability of SML as an EBL resist. It was established that SML is capable of high-resolution patterning and shows good pattern transfer capabilities. Germanium is a desirable material for use in microelectronic applications due to a number of superior qualities over silicon. EBL patterning of Ge with high-resolution hydrogen silsesquioxane (HSQ) resist is, however, difficult due to the presence of native surface oxides. Thus, to combat this problem, a new technique for passivating Ge surfaces prior to EBL processes is detailed in Chapter 4. The surface passivation was carried out using simple acids like citric acid and acetic acid. The acids were gentle on the surface and enabled the formation of high-resolution arrays of Ge nanowires using HSQ resist. Chapter 5 details the directed self-assembly (DSA) of block copolymers (BCPs) on EBL-patterned Si and, for the very first time, Ge surfaces. DSA of BCPs on templated substrates is a promising technology for high-volume and cost-effective nanofabrication. The BCP employed for this study was poly(styrene-b-ethylene oxide), and the substrates were pre-defined by HSQ templates produced by EBL. The DSA technique resulted in pattern rectification (ordering in the BCP) and in pattern multiplication within smaller areas.

Relevance:

30.00%

Publisher:

Abstract:

Background: Healthcare worldwide needs translation of basic ideas from engineering into the clinic. Consequently, there is increasing demand for graduates equipped with the knowledge and skills to apply interdisciplinary medicine/engineering approaches to the development of novel solutions for healthcare. The literature provides little guidance regarding barriers to, and facilitators of, effective interdisciplinary learning for engineering and medical students in a team-based project context. Methods: A quantitative survey was distributed to engineering and medical students and staff in two universities, one in Ireland and one in Belgium, to chart knowledge and practice in interdisciplinary learning and teaching, and of the teaching of innovation. Results: We report important differences for staff and students between the disciplines regarding attitudes towards, and perceptions of, the relevance of interdisciplinary learning opportunities, and the role of creativity and innovation. There was agreement across groups concerning preferred learning, instructional styles, and module content. Medical students showed greater resistance to the use of structured creativity tools and interdisciplinary teams. Conclusions: The results of this international survey will help to define the optimal learning conditions under which undergraduate engineering and medicine students can learn to consider the diverse factors which determine the success or failure of a healthcare engineering solution.

Relevance:

30.00%

Publisher:

Abstract:

For the past several years, U.S. colleges and universities have faced increased pressure to improve retention and graduation rates. At the same time, educational institutions have placed a greater emphasis on the importance of enrolling more students in STEM (science, technology, engineering and mathematics) programs and producing more STEM graduates. The resulting problem faced by educators involves finding new ways to support the success of STEM majors, regardless of their pre-college academic preparation. The purpose of my research study involved utilizing first-year STEM majors’ math SAT scores, unweighted high school GPA, math placement test scores, and the highest level of math taken in high school to develop models for predicting those who were likely to pass their first math and science courses. In doing so, the study aimed to provide a strategy to address the challenge of improving the passing rates of those first-year students attempting STEM-related courses. The study sample included 1018 first-year STEM majors who had entered the same large, public, urban, Hispanic-serving, research university in the Southeastern U.S. between 2010 and 2012. The research design involved the use of hierarchical logistic regression to determine the significance of utilizing the four independent variables to develop models for predicting success in math and science. The resulting data indicated that the overall model of predictors (which included all four predictor variables) was statistically significant for predicting those students who passed their first math course and for predicting those students who passed their first science course. Individually, all four predictor variables were found to be statistically significant for predicting those who had passed math, with the unweighted high school GPA and the highest math taken in high school accounting for the largest amount of unique variance. Those two variables also improved the regression model’s percentage of correctly predicting that dependent variable. The only variable that was found to be statistically significant for predicting those who had passed science was the students’ unweighted high school GPA. Overall, the results of my study have been offered as my contribution to the literature on predicting first-year student success, especially within the STEM disciplines.
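
As an illustration of the modeling approach, a hedged sketch of a logistic regression on synthetic stand-in data is shown below; the column names and effect sizes are invented for illustration, and the study's actual data and hierarchical block-entry procedure are not reproduced:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-ins for the four predictors; names and coefficients
# are illustrative only, not the study's data.
rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "math_sat": rng.normal(550, 80, n),
    "hs_gpa": np.clip(rng.normal(3.2, 0.4, n), 0, 4),
    "placement": rng.normal(60, 15, n),
    "highest_math": rng.integers(1, 5, n),   # ordinal level of HS math
})
logit = (0.01 * (df["math_sat"] - 550) + 1.5 * (df["hs_gpa"] - 3.2)
         + 0.5 * (df["highest_math"] - 2.5))
df["passed_math"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Single-block fit; the study entered predictors hierarchically in
# blocks and compared model fit at each step.
X = sm.add_constant(df[["math_sat", "hs_gpa", "placement", "highest_math"]])
result = sm.Logit(df["passed_math"], X).fit(disp=0)
print(result.summary2().tables[1])   # coefficients, SEs, p-values
```

In the hierarchical version, each predictor block is added in turn and the change in model chi-square indicates its unique contribution, which is how the study isolated high school GPA and highest math taken as the strongest predictors.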

Relevance:

30.00%

Publisher:

Abstract:

For a structural engineer, effective communication and interaction with architects is a key skill for success throughout their professional career. Structural engineers and architects have to share a common language and understanding of each other in order to achieve the most desirable architectural and structural designs. This interaction and engagement develops during their professional career but needs to be nurtured during their undergraduate studies. The objective of this paper is to present the strategies employed to engage higher-order thinking in structural engineering students in order to help them solve complex problem-based learning (PBL) design scenarios presented by architecture students. The strategies were applied in the experimental setting of an undergraduate module in structural engineering at Queen's University Belfast in the UK. They comprised active learning to engage with content knowledge, the use of physical conceptual structural models to reinforce key concepts and, finally, reinforcing the need for hand sketching of ideas to promote higher-order problem-solving. The strategies were evaluated through a student survey, student feedback and module facilitator (this author) reflection; they were qualitatively assessed by the tutor and quantitatively evaluated by students in a cross-sectional study with respect to helping interaction with the architecture students, aiding interdisciplinary learning and helping students creatively solve problems (through higher-order thinking). The students clearly enjoyed this module and in particular interacting with structural engineering tutors and students from another discipline.

Relevance:

30.00%

Publisher:

Abstract:

Development of reliable methods for optimised energy storage and generation is one of the most imminent challenges in modern power systems. This paper presents an adaptive approach to the load-leveling problem using novel dynamic models based on Volterra integral equations of the first kind with piecewise continuous kernels. These integral equations efficiently solve such inverse problems, taking into account both the time-dependent efficiencies and the generation/storage availability of each energy storage technology. In this analysis a direct numerical method is employed to find the least-cost dispatch of the available storages. The proposed collocation-type numerical method has second-order accuracy and enjoys self-regularization properties, which is associated with confidence levels of system demand. This adaptive approach is suitable for energy storage optimisation in real time. The efficiency of the proposed methodology is demonstrated on the Single Electricity Market of the Republic of Ireland and Northern Ireland.
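
The paper's piecewise-kernel models are not specified in the abstract, but the collocation idea for a first-kind Volterra equation can be illustrated with a uniform-mesh, piecewise-constant discretization; a minimal sketch under these assumptions, not the paper's method:

```python
import numpy as np

def solve_volterra_first_kind(K, f, T, n):
    """Collocation solution of int_0^t K(t,s) x(s) ds = f(t) on [0,T]
    with x piecewise constant on a uniform mesh: collocating at the
    right endpoints t_i gives a lower-triangular linear system that is
    solved by forward substitution."""
    h = T / n
    t = h * np.arange(1, n + 1)            # collocation points t_1..t_n
    s = h * (np.arange(n) + 0.5)           # midpoints of the mesh cells
    x = np.zeros(n)
    for i in range(n):
        # h * sum_{j<=i} K(t_i, s_j) x_j = f(t_i)
        acc = sum(K(t[i], s[j]) * x[j] for j in range(i)) * h
        x[i] = (f(t[i]) - acc) / (h * K(t[i], s[i]))
    return s, x

# Toy check with K = 1: int_0^t x ds = t**2 has solution x(t) = 2t.
s, x = solve_volterra_first_kind(lambda t, s: 1.0, lambda t: t ** 2, 1.0, 8)
print(np.max(np.abs(x - 2 * s)))   # ~1e-15: midpoint rule is exact here
```

In the load-leveling setting, x(s) plays the role of the storage charge/discharge schedule, K encodes time-dependent efficiencies and availability, and f is the demand signal to be leveled.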

Relevance:

30.00%

Publisher:

Abstract:

This work applies a hybrid approach to solving the university curriculum-based course timetabling problem as presented in the 2nd International Timetabling Competition 2007 (ITC2007). The core of the hybrid approach is an artificial bee colony algorithm. Past methods have applied artificial bee colony algorithms to university timetabling problems with high degrees of success. Nevertheless, there exist inefficiencies in the associated search abilities in terms of exploration and exploitation. To improve the search abilities, this work introduces a hybrid approach entitled the Nelder-Mead great deluge artificial bee colony algorithm (NMGD-ABC), which combines additional positive elements of particle swarm optimization and the great deluge algorithm. In addition, Nelder-Mead local search is incorporated into the great deluge algorithm to further enhance the performance of the resulting method. The proposed method is tested on curriculum-based course timetabling as presented in the ITC2007. Experimental results reveal that the proposed method is capable of producing competitive results compared with the other approaches described in the literature.
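
For readers unfamiliar with the base metaheuristic, a bare-bones artificial bee colony for a continuous toy objective is sketched below; the paper's NMGD-ABC adds Nelder-Mead local search, great-deluge acceptance and PSO-style elements on top of this skeleton, and timetabling would use a discrete move set rather than coordinate perturbation:

```python
import random

def abc_minimize(f, bounds, n_sources=20, limit=20, iters=300):
    """Bare-bones artificial bee colony for continuous minimization.
    Assumes f(x) >= 0 so the fitness transform 1/(1 + cost) is valid."""
    dim = len(bounds)

    def new_source():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    def try_improve(i):
        # Perturb one coordinate of source i toward/away from a random
        # partner source; accept greedily, else count a failed trial.
        k = random.randrange(dim)
        j = random.choice([s for s in range(n_sources) if s != i])
        cand = sources[i][:]
        cand[k] += random.uniform(-1, 1) * (sources[i][k] - sources[j][k])
        lo, hi = bounds[k]
        cand[k] = min(max(cand[k], lo), hi)
        c = f(cand)
        if c < costs[i]:
            sources[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    sources = [new_source() for _ in range(n_sources)]
    costs = [f(x) for x in sources]
    trials = [0] * n_sources

    for _ in range(iters):
        for i in range(n_sources):                 # employed-bee phase
            try_improve(i)
        total = sum(1.0 / (1.0 + c) for c in costs)
        for _ in range(n_sources):                 # onlooker phase: roulette
            r, acc = random.uniform(0, total), 0.0
            for i, c in enumerate(costs):
                acc += 1.0 / (1.0 + c)
                if acc >= r:
                    try_improve(i)
                    break
        for i in range(n_sources):                 # scout phase: reset stale
            if trials[i] > limit:
                sources[i] = new_source()
                costs[i] = f(sources[i])
                trials[i] = 0

    best = min(range(n_sources), key=costs.__getitem__)
    return sources[best], costs[best]

# Toy run on the sphere function.
x_best, f_best = abc_minimize(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
print(f_best)   # close to 0
```

The exploration/exploitation tension the paper targets is visible here: employed and onlooker bees exploit existing sources, while scouts re-explore; the hybrid elements are intended to sharpen both sides.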

Relevance:

30.00%

Publisher:

Abstract:

Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulty of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects of an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. The simulation method is explained in detail. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicate that muscles are pushing rather than pulling. Despite this unrealistic behavior, the error may be small enough to be acceptable for specific applications. These results are a starting point for achieving better results in inverse musculoskeletal simulations from a numerical point of view.
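
In generic form, the kind of regularized tracking problem discussed here can be written as follows; the constraint structure is an assumption for illustration, since the abstract specifies only that the regularizer is the activation norm and that only excitation bounds remain as constraints:

```latex
\min_{u(t)} \; \int_0^T \Big( \big\| q(t) - \hat{q}(t) \big\|_2^2
\;+\; \lambda \, \big\| a(t) \big\|_2^2 \Big) \, dt
\quad \text{subject to} \quad
\dot{x} = g\big(x(t), a(t)\big), \quad
\dot{a} = h\big(a(t), u(t)\big), \quad
0 \le u(t) \le 1,
```

where \hat{q} is the measured motion, q the simulated motion produced by the musculoskeletal dynamics g, a the muscle activations (3-element Hill model) driven by the activation dynamics h, u the bounded excitations, and \lambda the regularization weight that restores full rank of the approximated quadratic problem.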

Relevance:

30.00%

Publisher:

Abstract:

Academic literature has increasingly recognized the value of non-traditional higher education learning environments that emphasize action-orientated experiential learning for the study of entrepreneurship (Gibb, 2002; Jones & English, 2004). Many entrepreneurship educators have accordingly adopted approaches based on Kolb's (1984) experiential learning cycle to develop a dynamic, holistic model of an experience-based learning process. Jones and Iredale (2010) suggested that entrepreneurship education requires experiential learning styles and creative problem solving to effectively engage students. Support has also been expressed for learning-by-doing activities in group or network contexts (Rasmussen and Sorheim, 2006), and for student-led approaches (Fiet, 2001). This study builds on previous work by exploring the use of experiential learning in an applied setting to develop entrepreneurial attitudes and traits in students. Based on the above literature, a British higher education institution (HEI) implemented a new, entrepreneurially-focused curriculum during the 2013/14 academic year designed to support and develop students' entrepreneurial attitudes and intentions. The approach actively involved students in small-scale entrepreneurship activities by providing scaffolded opportunities for students to design and enact their own entrepreneurial concepts. Students were provided with the necessary resources and training to run small entrepreneurial ventures in three different working environments. During the course of the year, three applied entrepreneurial opportunities were provided for students, increasing in complexity, length, and profitability as the year progressed. For the first undertaking, the class was divided into small groups, and each group was given a time slot and venue to run a pop-up shop in a busy commercial shopping centre. Each group of students was supported by lectures and dedicated class time for group work, while receiving a set of objectives and recommended resources. For the second venture, groups of students were given the opportunity to utilize an on-campus bar/club for an evening and were asked to organize and run a profitable event, acting as an outside promoter. Students were supported with lectures and seminars, and groups were given a £250 budget to develop, plan, and market their unique event. The final event was optional and required initiative on the part of the students. Students were given the opportunity to develop and put forward business plans to be judged by the HEI and the supporting organizations, which selected the winning plan. The authors of the winning business plan received a £2000 budget and a six-week lease on a commercial retail unit within a shopping centre to run their business. Students received additional academic support upon request from the instructor, and one of the supporting organizations provided a training course offering advice on creating a budget and a business plan. Data from students taking part in each of the events were collected in order to ascertain the learning benefits of the experiential learning, along with the successes and difficulties the students faced. These responses have been collected and analyzed and will be presented at the conference, along with the instructor's conclusions and recommendations for the use of such programs in higher education.