948 results for semi-empirical methods
Abstract:
This paper advances a philosophically informed rationale for the broader, reflexive and practical application of arts-based methods to benefit research, practice and pedagogy. It addresses the complexity and diversity of learning and knowing, foregrounding a cohabitative position and recognition of a plurality of research approaches, tailored and responsive to context. Appreciation of art and aesthetic experience is situated in the everyday, underpinned by multi-layered exemplars of pragmatic visual-arts narrative inquiry undertaken in the third, creative and communications sectors. The discussion considers semi-guided use of arts-based methods as a conduit for topic engagement, reflection and intersubjective agreement, alongside observation and interpretation of organically employed approaches used by participants within daily norms. Techniques span handcrafted (drawing), digital (photography), hybrid (cartooning), performance (improvised installations) and musical (metaphor and structure) dimensions. The process of creation, the artefact or outcome produced and the experience of consummation are all significant, each with specific reflexivity impacts. Exploring methodology and epistemology, both the "doing" and its interpretation are explicated to inform method selection, replication, utility, evaluation and the development of cross-media skills literacy. The approaches are found to be engaging, accessible and empowering, with nuanced capabilities to alter relationships with phenomena, experiences and people. By building a discursive space that reduces barriers, they support emancipation, interaction, polyphony, letting-go and the progressive unfolding of thoughts, benefiting ways of knowing, narrative (re)construction, sensory perception and capacities to act. They can also present underexplored researcher risks with respect to emotion work, self-disclosure, identity and agenda. The paper therefore elucidates the complex, intricate relationships between form and content, the represented and the representation or performance, researcher and participant, and self and other. This benefits understanding of phenomena including personal experience, sensitive issues, empowerment, identity, transition and liminality. The observations are relevant to qualitative and mixed-methods researchers and a multidisciplinary audience, with explicit identification of challenges, opportunities and implications.
Abstract:
2000 Mathematics Subject Classification: 60K15, 60K20, 60G20, 60J75, 60J80, 60J85, 60-08, 90B15.
Abstract:
Following the introduction of the Basel II capital accord, banks and lenders in Hungary also began to build up their own internal rating systems, whose maintenance and development are a continuing task. The author explores whether the predictive power of bankruptcy forecasting models can be increased with traditional mathematical-statistical methods by incorporating into the models the measure of change in the financial ratios over time. The empirical findings suggest that the temporal development of the financial ratios of Hungarian firms carries important information about their future ability to pay, since using such indicators significantly increases the predictive power of bankruptcy models. The author also examines whether correcting extremely high or low values of the observations before modelling improves the classification performance of the models.
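For illustration, here is a minimal sketch of the modeling idea the abstract describes: augmenting a bankruptcy model with the change of each financial ratio over time, with optional correction of extreme values before modelling. All column names, data values, and the logistic specification are hypothetical, not taken from the study.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical firm-level data: each ratio observed in two consecutive years.
df = pd.DataFrame({
    "liquidity_t":  [1.2, 0.8, 1.5, 0.6, 1.1, 0.7],
    "liquidity_t1": [1.0, 1.1, 1.4, 0.9, 1.0, 1.2],
    "leverage_t":   [0.5, 0.9, 0.4, 1.1, 0.6, 1.0],
    "leverage_t1":  [0.5, 0.7, 0.5, 0.8, 0.6, 0.7],
    "default":      [0, 1, 0, 1, 0, 1],
})

# The key idea: add the temporal change of each indicator as a feature.
for ratio in ("liquidity", "leverage"):
    df[f"{ratio}_delta"] = df[f"{ratio}_t"] - df[f"{ratio}_t1"]

X = df[["liquidity_t", "leverage_t", "liquidity_delta", "leverage_delta"]]
# Optional winsorisation of extreme values before modelling, which the
# author also examines; 1st/99th percentile clipping shown here.
X = X.clip(lower=X.quantile(0.01), upper=X.quantile(0.99), axis=1)

model = LogisticRegression().fit(X, df["default"])
print(model.coef_)  # weights on the level and change features
```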
Abstract:
The purpose of this study was to investigate empirically the role of knowledge and innovation within Central and Eastern Europe's changing economy. We applied qualitative research methods and focused only on professional services firms within the region. The connections between knowledge and innovation, and between knowledge and competitiveness, were analyzed drawing on the views of top managers and senior industry experts. Our findings revealed that knowledge can be a real value driver for professional services firms. These companies can contribute significantly to the development of modern economies by disseminating their internal best practices in knowledge management. We identified three factors that may influence the effectiveness of knowledge management: involvement in international knowledge networks, investment in human capital, and a focus on critical resources. These factors proved essential for maximizing the potential of knowledge and leveraging it into increased business performance.
Abstract:
Improving the performance of public administration and reforming the public financing system have been on the agenda in Hungary for many years, in accordance with international trends. However, governments have not explicitly expected or comprehensively supported the creation of a performance-oriented public administration. Nevertheless, there are bottom-up initiatives at the organizational level that target performance-oriented organizational functioning. The research focuses on organizations of the central public administration where, according to the international literature, the successful application of performance management methods is most likely. These are the so-called agency-type organizations, which in Hungary comprise autonomous state-administration organizations independent of the Government (e.g. the Hungarian Competition Authority), government bureaus (e.g. the Hungarian Central Statistical Office), and central offices subordinated to the government, either the cabinet or a ministry (e.g. the Hungarian Meteorological Service). The studied agencies are legally independent organizations with managerial autonomy based on public law. The purpose of this study is to provide an overview of the organizational-level performance management tools applied by Hungarian agencies and to reveal the reasons for and drivers of the application of these tools. The empirical research is based on a mixed-methods approach combining quantitative methods and qualitative procedures. The first, quantitative, phase of the author's research was a content analysis of the homepages of the studied organizations, which yielded information about all agencies and their practice with respect to certain performance management tools. The second, qualitative, phase was based on semi-structured face-to-face interviews with senior managers of selected agencies. The author selected the interviewees based on the results of the first phase; relatively strong performance orientation was an important selection criterion.
Abstract:
The financial community is well aware that continued underfunding of state and local government pension plans poses many public policy and fiduciary management concerns. However, a well-defined theoretical rationale has not been developed to explain why and how public sector pension plans underfund. This study uses three methods: a survey of national pension experts, an incomplete covariance panel method, and field interviews. A survey of national public sector pension experts was conducted to provide a conceptual framework by which underfunding could be evaluated. Experts suggest that plan design, fiscal stress, and political culture factors affect underfunding. However, experts do not agree with previous research findings that unions actively pursue underfunding to secure current wage increases. Within the conceptual framework and determinants identified by the experts, several empirical regularities are documented for the first time. An analysis of 173 local government pension plans, observed from 1987 to 1992, was conducted. Findings indicate that underfunding occurs in plans that have lower retirement ages or increased costs due to benefit enhancements, when the sponsor faces current-year operating deficits, or when a local government relies heavily on inelastic revenue sources. Results also suggest that elected officials artificially inflate interest rate assumptions to reduce current pension costs, consequently shifting these costs to future generations. In concurrence with some experts, there are no data to support the assumption that highly unionized employees secure more funding than less unionized employees. The empirical results provide satisfactory but not overwhelming statistical power, and only minor predictive capacity. To explore further why underfunding occurs, field interviews were carried out with 62 local government officials. Practitioners indicated that perceived fiscal stress, the willingness of policymakers to advance funding, bargaining strategies used by union officials, apathy among employees and retirees, pension board composition, and the level of influence of internal pension experts have an impact on funding outcomes. A pension funding process model was posited by triangulating the expert survey, the empirical findings, and the field survey results. This funding process model should help shape and refine our theoretical knowledge of state and local government pension underfunding in the future.
Abstract:
The purpose of this research was to compare the delivery methods practiced by higher education faculty teaching distance courses with recommended or emerging standard instructional delivery methods for distance education. Previous research shows that traditional instructional strategies have been used in distance education and that faculty have received no training in distance teaching. Secondary data, however, appear to suggest emerging practices that could be pooled toward the development of standards. This is a qualitative study based on the constant comparative analysis approach of grounded theory. Participants (N = 5) were full-time faculty teaching distance education courses. The observation method used was unobtrusive content analysis of videotaped instruction. Triangulation of the data was accomplished through one-on-one in-depth interviews and the literature review. Because non-media content was also being analyzed, the researcher designed a special time-sampling technique, influenced by content analysts' theories of media-related data, to sample the portions of the videotaped instruction that were observed and counted. A standardized interview guide was used to collect data from the in-depth interviews. Coding was based on categories drawn from the review of literature and from Cranton and Weston's (1989) typology of instructional strategies. The data were observed, counted, tabulated, analyzed, and interpreted solely by the researcher; systematic and rigorous data collection and analysis, however, led to credible data. The findings supported the proposition that there are no standard instructional practices for distance teaching. Further, the findings revealed that of the emerging practices suggested by proponents and by faculty who teach distance education courses, few were practiced even minimally. A noted example was the use of lecture and questioning: questioning was used a great deal as a teaching tool with students at the originating site but not with distance students, and lectures were given mostly in traditional fashion, long in duration and with no interactive component. It can be concluded that while there are no standard practices for instructional delivery in distance education, there appears to be sufficient information from secondary and empirical data to initiate some standard instructional practices. Grounded in these data, therefore, is the theory that the way to arrive at instructional delivery standards for televised distance education is to pool the tacitly agreed-upon emerging practices of proponents and practicing instructors. Implicit in this theory is a need for experimental research so that these emerging practices can be tested, tried, and proven, ultimately resulting in formal standards for instructional delivery in television education.
Abstract:
Accurate knowledge of the time since death, or postmortem interval (PMI), has enormous legal, criminological, and psychological impact. In this study, an investigation was made to determine whether the relationship between the degradation of the human cardiac structural protein cardiac troponin T (cTnT) and PMI could be used as an indicator of time since death, thus providing a rapid, high-resolution, sensitive, and automated methodology for the determination of PMI. The use of cTnT, a protein found in heart tissue, as a selective marker for cardiac muscle damage has shown great promise in the determination of PMI. An optimized conventional immunoassay method was developed to quantify intact and fragmented cTnT. A small sample of cardiac tissue, which is less affected than other tissues by external factors, was taken, homogenized, extracted with magnetic microparticles, separated by SDS-PAGE, and visualized by Western blot, probing with a monoclonal antibody against cTnT, followed by labeling and detection with available scanners. This conventional immunoassay provides proper detection and quantitation of cTnT in cardiac tissue as a complex matrix; however, it does not provide the analyst with immediate results. Therefore, a competitive separation method using capillary electrophoresis with laser-induced fluorescence (CE-LIF) was developed to study the interaction between human cTnT and a monoclonal anti-troponin T antibody. Analysis of the results revealed a linear relationship between the percentage of degraded cTnT and the log of the PMI, indicating that intact cTnT could be detected in human heart tissue up to 10 days postmortem at room temperature and beyond two weeks at 4 °C. The data demonstrate that this technique can provide an extended time range during which PMI can be estimated more accurately than with currently used methods, and that it represents a major advance in time-of-death determination through fast, reliable, semi-quantitative measurement of a biochemical marker from an organ protected from outside factors.
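As a rough illustration of how the reported linear relationship between percent degraded cTnT and log(PMI) could be used in practice, here is a hedged sketch; the calibration points below are invented for illustration and are not the study's data.

```python
import numpy as np

# Invented calibration points (NOT the study's data): PMI in days vs.
# measured percentage of degraded cTnT.
pmi_days = np.array([0.5, 1.0, 2.0, 4.0, 7.0, 10.0])
pct_degraded = np.array([12.0, 25.0, 38.0, 52.0, 68.0, 80.0])

# The abstract reports that percent degraded cTnT is linear in log(PMI),
# i.e. pct = a * log10(PMI) + b.
a, b = np.polyfit(np.log10(pmi_days), pct_degraded, deg=1)

def estimate_pmi_days(pct):
    """Invert the calibration line to estimate PMI from % degraded cTnT."""
    return 10.0 ** ((pct - b) / a)

print(f"PMI at 60% degradation: ~{estimate_pmi_days(60.0):.1f} days")
```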
Abstract:
Smokeless powder additives are usually detected by their extraction from post-blast residues or unburned powder particles, followed by analysis using chromatographic techniques. This work presents the first comprehensive study of the detection of the volatile and semi-volatile additives of smokeless powders using solid-phase microextraction (SPME) as a sampling and pre-concentration technique. Seventy smokeless powders were studied using laboratory-based chromatography techniques and a field-deployable ion mobility spectrometer (IMS). The detection of diphenylamine, ethyl and methyl centralite, 2,4-dinitrotoluene, and diethyl and dibutyl phthalate by IMS, to associate the presence of these compounds with smokeless powders, is also reported for the first time. A previously reported SPME-IMS analytical approach facilitates rapid sub-nanogram detection of the vapor-phase components of smokeless powders. A mass calibration procedure for the analytical techniques used in this study was developed: precise and accurate delivery of analyte masses in picoliter volumes was achieved using a drop-on-demand inkjet printing method. Absolute mass detection limits determined using this method for the various analytes of interest ranged between 0.03 and 0.8 ng for the GC-MS and between 0.03 and 2 ng for the IMS. Mass response graphs generated for the different detection techniques help determine the mass extracted from the headspace of each smokeless powder. The analyte mass present in the vapor phase was sufficient for a SPME fiber to extract most analytes at amounts above the detection limits of both the chromatographic techniques and the ion mobility spectrometer. Analysis of the large number of smokeless powders revealed that diphenylamine was present in the headspace of 96% of the powders, ethyl centralite was detected in 47%, and methyl centralite was available for detection from headspace sampling by SPME in 8%. Nitroglycerin was the dominant peak in the headspace of the double-based powders, and 2,4-dinitrotoluene, another important headspace component, was detected in 44% of the powders. The powders therefore have more than one headspace component, and detection of a combination of these compounds is achievable by SPME-IMS, leading to an association with the presence of smokeless powders.
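The drop-on-demand mass calibration reduces to simple arithmetic: delivered mass is solution concentration times total dispensed volume. A sketch with assumed values (the droplet volume, concentration, and droplet count below are not from the study):

```python
# Assumed values, not from the study.
droplet_volume_pl = 100.0        # picoliters per droplet
conc_ng_per_ul = 100.0           # analyte standard concentration, ng/uL
n_droplets = 5

volume_ul = droplet_volume_pl * n_droplets * 1e-6   # pL -> uL
mass_ng = conc_ng_per_ul * volume_ul
print(f"Delivered mass: {mass_ng:.3f} ng")  # 0.050 ng, near the reported LODs
```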
Sales tax enforcement: An empirical analysis of compliance enforcement methodologies and pathologies
Abstract:
Most research on tax evasion has focused on the income tax; sales tax evasion has been largely ignored and dismissed as immaterial. This paper explored the differences between income tax and sales tax evasion and demonstrated that sales tax enforcement deserves and requires different tools to achieve compliance. Specifically, the major enforcement problem with the sales tax is not evasion: it is theft perpetrated by companies that act as collection agents for the state. Companies engage in a principal-agent relationship with the state, and many retain funds collected as an agent of the state for private use. As such, the act of sales tax theft bears more resemblance to embezzlement than to income tax evasion. It has long been assumed that the sales tax is nearly evasion-free, and state revenue departments report voluntary compliance in a manner that perpetuates this myth. Current sales tax compliance enforcement methodologies are similar in form to income tax compliance enforcement methodologies and are based largely on trust: the primary focus is on delinquent filers, with a very small percentage of businesses subject to audit. As a result, there is a very large group of noncompliant businesses that file on time and fly below the radar while stealing millions of taxpayer dollars. The author used a variety of statistical methods with actual field data derived from the operations of the Southern Region Criminal Investigations Unit of the Florida Department of Revenue to evaluate current and proposed sales tax compliance enforcement methodologies in a quasi-experimental, time-series research design, and to set forth a typology of sales tax evaders. This study showed that current estimates of voluntary compliance in sales tax systems are seriously and significantly overstated and that current enforcement methodologies are inadequate to identify the majority of violators and enforce compliance. Sales tax evasion is modeled using the theory of planned behavior and Cressey's fraud triangle, and it is demonstrated that proactive enforcement activities, characterized by substantial contact with non-delinquent taxpayers, result in a superior ability to identify noncompliance and provide a structure through which noncompliant businesses can be rehabilitated.
Abstract:
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation uses the mean quartic error function, whose third and fourth derivatives are non-zero; consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after the update. To evaluate the performance of a training system, three essential factors must be considered, listed here from highest to lowest priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed for lower- and upper-case character recognition. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is used to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. The testing database consists of 20,000 handwritten characters, 10,000 for each case; recognizing the 10,000 handwritten characters of the testing phase required 8.5 seconds of processing time.
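A minimal sketch of the error function the dissertation substitutes for the mean square error; the NumPy formulation below is an illustration, not the dissertation's code.

```python
import numpy as np

def mean_quartic_error(y_pred, y_true):
    """Mean quartic error: the average fourth power of the residuals.
    Unlike the mean square error, its third and fourth derivatives with
    respect to the residual are non-zero."""
    r = np.asarray(y_pred) - np.asarray(y_true)
    return np.mean(r ** 4)

def mean_quartic_error_grad(y_pred, y_true):
    """Gradient with respect to y_pred: d/dy mean(r^4) = 4 r^3 / n."""
    r = np.asarray(y_pred) - np.asarray(y_true)
    return 4.0 * r ** 3 / r.size
```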
Abstract:
An increasing number of students are selecting for-profit universities to pursue their education (Snyder, Tan & Hoffman, 2006). Despite this trend, little empirical research attention has focused on these institutions, and the literature that exists has been characterized as rudimentary (Tierney & Hentschke, 2007). The purpose of this study was to investigate the factors that differentiated students who persisted beyond the first session at a for-profit university. A mixed-methods research design consisting of three strands was used. In strand 1, students' self-reported perceptions of what their college experience would be like were collected using the College Student Inventory. The second strand employed a survey design focusing on the beliefs that guided participants' decisions to attend college; discriminant analysis was used to determine which factors differentiated students who persisted from those who did not. A purposeful sample and a semi-structured interview guide were used during the third strand, and data from this strand were analyzed thematically. Students' self-reported dropout proneness, predicted academic difficulty, attitudes toward educators, sense of financial security, verbal confidence, gender, and number of hours worked while enrolled in school differentiated students who persisted in their studies from those who dropped out. Several themes emerged from the interview data. Participants noted that financial concerns, balancing the demands of college with the demands of their lives, and a lack of knowledge about how colleges operate were barriers to persistence, while college staff and faculty support were the most significant supports reported by those interviewed. Implications for future research and practice are included in this study.
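A hedged sketch of the discriminant analysis step in strand 2; the predictors and data below are hypothetical stand-ins for the inventory scales named in the abstract.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical predictors: dropout proneness, predicted academic difficulty,
# hours worked per week; y = 1 if the student persisted past the first session.
X = np.array([[0.2, 0.3, 10], [0.8, 0.7, 35], [0.3, 0.4, 15],
              [0.9, 0.6, 40], [0.4, 0.2, 20], [0.7, 0.8, 30]])
y = np.array([1, 0, 1, 0, 1, 0])

lda = LinearDiscriminantAnalysis().fit(X, y)
# The coefficients indicate which factors best separate persisters from
# non-persisters; the sign gives the direction of the effect.
print(lda.coef_)
```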
Abstract:
The purpose of this study was to examine the relationship between the structure of jobs and burnout, and to assess to what extent, if any, this relationship was moderated by individual coping methods. The study was grounded in Karasek's (1998) Job Demand-Control-Support theory of work stress and Maslach and Leiter's (1993) theory of burnout; coping was examined as a moderator based on the conceptualization of Lazarus and Folkman (1984). Two overarching questions framed this study: (a) What is the relationship between job structure, as operationalized by job title, and burnout across different support-services occupations in a large municipal school district? (b) To what extent do individual differences in coping methods moderate this relationship? This was a cross-sectional study of county public school bus drivers, bus aides, mechanics, and clerical workers (N = 253) at three bus depot locations within the same district, using validated survey instruments for data collection. Hypotheses were tested using simultaneous regression analyses. Findings indicated statistically significant and relevant relationships among the variables of interest: job demands, job control, burnout, and ways of coping. There was a relationship between job title and physical job demands, but no evidence of a relationship between job title and psychological demands. Furthermore, there was a relationship between physical demands and emotional exhaustion and personal accomplishment, key indicators of burnout. Results showed significant correlations for individual ways of coping as a moderator between job structure, operationalized by job title, and individual employee burnout, adding empirical evidence to the occupational stress literature. Based on the findings, there are implications for theory, research, and practice. For theory and research, the findings suggest the importance of incorporating transactional models in the study of occupational stress; in practice, they highlight the importance of enriching jobs, increasing job control, and providing individual-level training related to stress reduction.
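The moderation hypothesis tested here is conventionally operationalized with an interaction term in a regression model; a sketch of such a specification (the variable names are illustrative, not the study's):

```latex
\text{Burnout}_i = \beta_0 + \beta_1\,\text{Demands}_i + \beta_2\,\text{Coping}_i
                 + \beta_3\,(\text{Demands}_i \times \text{Coping}_i) + \varepsilon_i
```

A statistically significant $\beta_3$ indicates that coping moderates the demands-burnout relationship, which is the pattern the study reports.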
Abstract:
In longitudinal data analysis, our primary interest is in the regression parameters for the marginal expectations of the longitudinal responses; the longitudinal correlation parameters are of secondary interest. The joint likelihood function for longitudinal data is challenging, particularly for correlated discrete outcome data. Marginal modeling approaches such as generalized estimating equations (GEEs) have received much attention in the context of longitudinal regression. These methods are based on estimates of the first two moments of the data and a working correlation structure, with confidence regions and hypothesis tests based on asymptotic normality. The methods are sensitive to misspecification of the variance function and the working correlation structure: under such misspecifications, the estimates can be inefficient and inconsistent, and inference may give incorrect results. To overcome this problem, we propose an empirical likelihood (EL) procedure based on a set of estimating equations for the parameter of interest and discuss its characteristics and asymptotic properties. We also provide an algorithm, based on EL principles, for estimating the regression parameters and constructing a confidence region for the parameter of interest. We extend our approach to variable selection for high-dimensional longitudinal data with many covariates, where it is necessary to identify a submodel that adequately represents the data, since including redundant variables may reduce the model's accuracy and efficiency for inference. We propose a penalized empirical likelihood (PEL) variable selection procedure based on GEEs, in which variable selection and estimation of the coefficients are carried out simultaneously. We discuss its characteristics and asymptotic properties and present an algorithm for optimizing the PEL. Simulation studies show that when the model assumptions are correct, our method performs as well as existing methods, and when the model is misspecified, it has clear advantages. We have applied the method to two case examples.
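For orientation, a sketch of the standard forms of the two ingredients the abstract combines (the notation is generic, not necessarily the authors'): the GEE estimating function and the empirical likelihood ratio built from estimating equations.

```latex
% GEE estimating equations for the marginal regression parameters \beta:
\sum_{i=1}^{n} D_i^{\top} V_i^{-1}\,\bigl(Y_i - \mu_i(\beta)\bigr) = 0,
\qquad D_i = \partial \mu_i / \partial \beta .

% Empirical likelihood ratio based on estimating functions g_i(\beta):
R(\beta) = \max\Bigl\{ \prod_{i=1}^{n} n w_i \;:\; w_i \ge 0,\;
\textstyle\sum_i w_i = 1,\; \sum_i w_i\, g_i(\beta) = 0 \Bigr\}.

% Penalized EL for variable selection (p_\lambda a penalty such as SCAD):
\ell_{P}(\beta) = \log R(\beta) - n \sum_{j} p_{\lambda}\bigl(|\beta_j|\bigr).
```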