66 results for single case Study
Abstract:
Purpose: This paper aims to analyse various aspects of an academic social network: the profile of users, the reasons for its use, its perceived benefits and the use of other social media for scholarly purposes. Design/methodology/approach: The authors examined the profiles of the users of an academic social network. The users were affiliated with 12 universities. The following were recorded for each user: sex, the number of documents uploaded, the number of followers and the number of people being followed. In addition, a survey was sent to the individuals who had an email address in their profile. Findings: Half of the users of the social network were academics and a third were PhD students. Social sciences scholars accounted for nearly half of all users. Academics used the service to get in touch with other scholars, disseminate research results and follow other scholars. Other widely used social media included citation indexes; document creation, editing and sharing tools; and communication tools. Users complained about the lack of support for the use of these tools. Research limitations/implications: The results are based on a single case study. Originality/value: This study provides new insights into the impact of social media in academic contexts by analysing the user profiles and benefits of a social network service that is specifically targeted at the academic community.
Abstract:
The present study builds on a previous proposal for assigning probabilities to the outcomes computed using different primary indicators in single-case studies. These probabilities are obtained by comparing the outcome to previously tabulated reference values and reflect the likelihood of the results if there were no intervention effect. The current study explores how well different metrics are translated into p values in the context of simulation data. Furthermore, two published multiple-baseline data sets are used to illustrate how well the probabilities reflect the intervention effectiveness as assessed by the original authors. Finally, the importance of which primary indicator is used in each data set to be integrated is explored; two ways of combining probabilities are used: a weighted average and a binomial test. The results indicate that the translation into p values works well for the two nonoverlap procedures, whereas the results for the regression-based procedure diverge owing to some undesirable features of its performance. These p values, both individually and combined, were well aligned with the effectiveness observed in the real-life data. The results suggest that assigning probabilities can be useful for translating the primary measure into a common metric, using these probabilities as additional evidence of the importance of behavioral change, complementing visual analysis and professionals' judgments.
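The two ways of combining probabilities mentioned in the abstract (a weighted average and a binomial test) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the p-values, the weights and the significance threshold are hypothetical, and the tabulated reference values the study relies on are not reproduced here.

```python
from math import comb

def weighted_average(p_values, weights):
    """Combine per-study p-values as a weighted average."""
    return sum(p * w for p, w in zip(p_values, weights)) / sum(weights)

def binomial_combination(p_values, alpha=0.05):
    """Probability of observing at least the obtained number of significant
    results if all studies were null, i.e. P(X >= k) with X ~ Binomial(n, alpha)."""
    n = len(p_values)
    k = sum(p <= alpha for p in p_values)
    return sum(comb(n, i) * alpha**i * (1 - alpha)**(n - i)
               for i in range(k, n + 1))

# Hypothetical p-values from three single-case studies,
# weighted (for example) by series length
ps = [0.03, 0.10, 0.02]
wa = weighted_average(ps, [5, 8, 6])
bc = binomial_combination(ps)
```

The binomial test treats each study as a Bernoulli trial, so it only uses whether each p-value crosses the threshold, whereas the weighted average retains the magnitude of each p-value.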
Abstract:
This case study deals with rock face monitoring in urban areas using a Terrestrial Laser Scanner (TLS). The pilot study area is an almost vertical, fifty-meter-high cliff, on top of which the village of Castellfollit de la Roca is located. Rockfall activity is currently causing a retreat of the rock face, which may endanger the houses located at its edge. The TLS datasets consist of high-density 3-D point clouds acquired from five stations, nine times over a time span of 22 months (from March 2006 to January 2008). Change detection (i.e. the identification of rockfalls) was performed through a sequential comparison of datasets. Two types of mass movement were detected in the monitoring period: (a) detachment of single basaltic columns, with magnitudes below 1.5 m³, and (b) detachment of groups of columns, with magnitudes of 1.5 to 150 m³. Furthermore, the historical record revealed (c) the occurrence of slab failures with magnitudes higher than 150 m³. Displacements of a likely slab failure were measured, suggesting an apparently stationary stage. Even though failures are clearly episodic, our results, together with the study of the historical record, enabled us to estimate a mean detachment of material of 46 to 91.5 m³ year⁻¹. The application of TLS considerably improved our understanding of rockfall phenomena in the study area.
Abstract:
After a rockfall event, a typical post-event survey includes qualitative volume estimation, trajectory mapping and determination of departure zones; however, quantitative measurements are not usually made. Additional quantitative information could be useful in determining the spatial occurrence of rockfall events and in quantifying their size. Seismic measurements could be suitable for detection purposes, since they are non-invasive and relatively inexpensive. Moreover, seismic techniques could provide important information on rockfall size and the location of impacts. On 14 February 2007 the Avalanche Group of the University of Barcelona obtained the seismic data generated by an artificially triggered rockfall event at the Montserrat massif (near Barcelona, Spain), carried out in order to purge a slope. Two three-component seismic stations were deployed in the area, about 200 m from the explosion point that triggered the rockfall. Seismic signals and video images were obtained simultaneously. The initial volume of the rockfall was estimated at 75 m³ by laser scanner data analysis. After the explosion, dozens of boulders ranging from 10⁻⁴ to 5 m³ in volume impacted on the ground at different locations. The blocks fell onto a terrace 120 m below the release zone. The impact generated a small continuous mass movement composed of a mixture of rocks, sand and dust that ran down the slope and impacted on the road 60 m below. Time, time-frequency evolution and particle motion analyses of the seismic records, as well as seismic energy estimation, were performed. The results are as follows: (1) a rockfall event generates seismic signals with specific characteristics in the time domain; (2) the seismic signals generated by the mass movement show a time-frequency evolution different from that of other seismogenic sources (e.g. earthquakes, explosions or a single rock impact), a feature that could be used for detection purposes; (3) particle motion plot analysis shows that the procedure to locate the rock impact using two stations is feasible; (4) the feasibility and validity of seismic methods for the detection of rockfall events, their localization and size determination are confirmed.
Abstract:
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed from the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of these distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. A further aim of the study was to test the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may become too probable and, hence, the decision-making process carried out by applied researchers may be jeopardized.
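The randomization test described above can be sketched in a few lines. This is a generic illustration with made-up data, not the study's simulation code: the test statistic (mean difference between phases), the minimum phase length and the one-sided direction are all assumptions for the example.

```python
def randomization_test(data, actual_point, min_phase=3):
    """Randomization test for an AB design: compute the test statistic
    (mean of phase B minus mean of phase A) for every admissible
    intervention point; the p-value is the proportion of assignments whose
    statistic is at least as large as the one actually observed."""
    def stat(k):
        a, b = data[:k], data[k:]
        return sum(b) / len(b) - sum(a) / len(a)

    # Admissible intervention points leave at least min_phase
    # measurements in each phase
    points = range(min_phase, len(data) - min_phase + 1)
    dist = [stat(k) for k in points]
    observed = stat(actual_point)
    return sum(s >= observed for s in dist) / len(dist)

# Hypothetical series: 5 baseline points, then the intervention
series = [2, 3, 2, 1, 2, 5, 6, 5, 7, 6]
p = randomization_test(series, actual_point=5)
```

Note that with only five admissible intervention points the smallest attainable p-value is 1/5 = 0.2, which illustrates why short or unbalanced phases constrain the test and can distort its error rates.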
Abstract:
Effect size indices are indispensable for carrying out meta-analyses and can also be seen as an alternative for making decisions about the effectiveness of a treatment in an individual applied study. The desirable features of procedures for quantifying the magnitude of an intervention effect include educational/clinical meaningfulness, ease of calculation, insensitivity to autocorrelation, and low false alarm and miss rates. Three effect size indices related to visual analysis are compared according to the aforementioned criteria. The comparison is made by means of data sets with known parameters: degree of serial dependence, presence or absence of general trend, and changes in level and/or in slope. The percentage of nonoverlapping data showed the highest discrimination between data sets with and without an intervention effect. In cases where autocorrelation or trend is present, the percentage of data points exceeding the median may be a better option for quantifying the effectiveness of a psychological treatment.
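The two nonoverlap indices discussed above have standard definitions that can be sketched briefly. The data are hypothetical, and the example assumes the intervention is expected to increase the target behaviour (for a decrease, the comparisons would be reversed).

```python
def pnd(baseline, treatment):
    """Percentage of Nonoverlapping Data: share of treatment points that
    exceed the highest baseline point."""
    ceiling = max(baseline)
    return 100 * sum(x > ceiling for x in treatment) / len(treatment)

def pem(baseline, treatment):
    """Percentage of data points Exceeding the baseline Median."""
    s = sorted(baseline)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return 100 * sum(x > median for x in treatment) / len(treatment)

# Hypothetical AB data
a = [3, 4, 3, 5, 4]
b = [6, 7, 5, 8, 7, 9]
pnd_val = pnd(a, b)  # one treatment point does not exceed the baseline maximum
pem_val = pem(a, b)  # all treatment points exceed the baseline median
```

Because PEM compares against the median rather than the most extreme baseline value, it is less affected by a single outlying baseline observation, which is consistent with the abstract's remark about its robustness.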
Abstract:
Visual inspection remains the most frequently applied method for detecting treatment effects in single-case designs. The advantages and limitations of visual inference are discussed here in relation to other procedures for assessing intervention effectiveness. The first part of the paper reviews previous research on visual analysis, paying special attention to the validation of visual analysts' decisions, inter-judge agreement, and false alarm and omission rates. The most relevant factors affecting visual inspection (i.e., effect size, autocorrelation, data variability, and analysts' expertise) are highlighted and incorporated into an empirical simulation study with the aim of providing further evidence about the reliability of visual analysis. Our results concur with previous studies that have reported a relationship between serial dependence and increased Type I error rates. Participants with greater experience appeared to be more conservative and used more consistent criteria when assessing graphed data. Nonetheless, the decisions made by both professionals and students did not sufficiently match the simulated data features, and we also found low intra-judge agreement, suggesting that visual inspection should be complemented by other methods when assessing treatment effectiveness.
Abstract:
If single-case experimental designs are to be used to establish guidelines for evidence-based interventions in clinical and educational settings, numerical values that reflect treatment effect sizes are required. The present study compares four recently developed procedures for quantifying the magnitude of intervention effect using data with known characteristics. Monte Carlo methods were used to generate AB design data with potential confounding variables (serial dependence, linear and curvilinear trend, and heteroscedasticity between phases) and two types of treatment effect (level and slope change). The results suggest that data features are important for choosing the appropriate procedure and, thus, that inspecting the graphed data visually is a necessary initial stage. In the presence of serial dependence or a change in data variability, the Nonoverlap of All Pairs (NAP) and the Slope and Level Change (SLC) were the only techniques of the four examined that performed adequately. Introducing a data correction step in NAP renders it unaffected by linear trend, as is also the case for the Percentage of Nonoverlapping Corrected Data and SLC. The performance of these techniques indicates that professionals' judgments concerning treatment effectiveness can be readily complemented by both visual and statistical analyses. A flowchart to guide the selection of techniques according to the data characteristics identified by visual inspection is provided.
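The NAP index mentioned above has a standard pairwise definition that can be sketched as follows (the data are hypothetical, and the example assumes the intervention is expected to increase the scores; the trend-correction step discussed in the abstract is not shown).

```python
def nap(baseline, treatment):
    """Nonoverlap of All Pairs: proportion of all (baseline, treatment)
    pairs in which the treatment value exceeds the baseline value;
    ties count as half an overlap (0.5)."""
    pairs = [(a, b) for a in baseline for b in treatment]
    score = sum(1.0 if b > a else 0.5 if b == a else 0.0
                for a, b in pairs)
    return score / len(pairs)

# Hypothetical AB data with partial overlap between phases
a = [2, 3, 5, 3]
b = [5, 6, 4, 7]
nap_val = nap(a, b)
```

A NAP of 0.5 indicates chance-level separation between phases, while 1.0 indicates complete nonoverlap, so the index reads like a probability of superiority.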
Abstract:
The present study focuses on single-case data analysis, specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared to a proposed non-regression technique, which yields similar information. The comparison is carried out in the context of generated data representing a variety of patterns (i.e., independent measurements, different serial dependence underlying processes, constant or phase-specific autocorrelation and data variability, different types of trend, and slope and level change). The results suggest that the two techniques perform adequately over a wide range of conditions and that researchers can use either of them with reasonable confidence. The regression-based procedure offers more efficient estimates, whereas the proposed non-regression procedure is more sensitive to intervention effects. Considering current and previous findings, some tentative recommendations are offered to applied researchers to help them choose among the plurality of single-case data analysis techniques.
Abstract:
Increasing anthropogenic pressures urge enhanced knowledge and understanding of the current state of marine biodiversity. This baseline information is pivotal for exploring present trends, detecting future modifications and proposing adequate management actions for marine ecosystems. Coralligenous outcrops are a highly diverse and structurally complex deep-water habitat faced with major threats in the Mediterranean Sea. Despite its ecological, aesthetic and economic value, coralligenous biodiversity patterns are still poorly understood. There is currently no single sampling method that has been demonstrated to be sufficiently representative to ensure adequate community assessment and monitoring in this habitat. We therefore propose a rapid, non-destructive protocol for biodiversity assessment and monitoring of coralligenous outcrops that provides good estimates of their structure and species composition, based on photographic sampling and the determination of the presence/absence of macrobenthic species. We used an extensive photographic survey covering several spatial scales (100s of m to 100s of km) within the NW Mediterranean and including two different coralligenous assemblages: the Paramuricea clavata assemblage (PCA) and the Corallium rubrum assemblage (CRA). This approach allowed us to determine the minimal sampling area for each assemblage (5000 cm² for PCA and 2500 cm² for CRA). In addition, we conclude that three replicates provide an optimal sampling effort to maximize the species number and to assess the main biodiversity patterns of the studied assemblages in variability studies requiring replicates. We contend that the proposed sampling approach provides a valuable tool for management and conservation planning, monitoring and research programs focused on coralligenous outcrops, and is potentially applicable in other benthic ecosystems.
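Decisions about minimal sampling area and number of replicates are typically based on species-accumulation curves built from presence/absence records. The sketch below illustrates that generic idea; the species names and quadrat data are invented and the averaging over random orderings is an assumption, not the authors' protocol.

```python
import random

def accumulation_curve(quadrats, n_orders=200, seed=0):
    """Mean species-accumulation curve: the cumulative number of unique
    species as photographic quadrats (presence/absence sets) are added,
    averaged over random orderings of the quadrats."""
    rng = random.Random(seed)
    totals = [0.0] * len(quadrats)
    for _ in range(n_orders):
        order = quadrats[:]
        rng.shuffle(order)
        seen = set()
        for i, q in enumerate(order):
            seen |= q
            totals[i] += len(seen)
    return [t / n_orders for t in totals]

# Hypothetical presence/absence records from four photographic quadrats
photos = [{"P. clavata", "C. rubrum"}, {"P. clavata", "bryozoan"},
          {"C. rubrum", "sponge"}, {"P. clavata", "sponge"}]
curve = accumulation_curve(photos)
```

The sampling effort at which the curve flattens suggests the point beyond which adding quadrats yields few new species, which is the logic behind fixing a minimal sampling area and replicate number.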
Abstract:
The present study evaluates the performance of four methods for estimating regression coefficients used to make statistical decisions regarding intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and one eliminating autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approximate the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling for trend show lower false alarm rates but prove to be insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.
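The baseline approach being evaluated, testing a level-change coefficient by ordinary least squares, can be sketched as follows. This is a generic illustration with invented data, not the study's simulation design: the model (intercept, linear time trend, phase dummy) is one common specification, and no correction for autocorrelation is applied, which is precisely the situation in which the abstract reports distorted error rates.

```python
import numpy as np

def level_change_ols(y, n_a):
    """OLS fit of y = b0 + b1*time + b2*phase, where phase is a dummy
    marking the treatment phase; returns the level-change estimate b2
    and its t statistic (to be referred to a t distribution)."""
    n = len(y)
    time = np.arange(n)
    phase = (time >= n_a).astype(float)
    X = np.column_stack([np.ones(n), time, phase])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = n - X.shape[1]
    sigma2 = resid @ resid / dof               # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
    return beta[2], beta[2] / se

# Hypothetical AB series: 5 baseline and 5 treatment measurements
y = np.array([2.0, 2.5, 2.1, 2.8, 2.4, 5.0, 5.6, 5.2, 5.9, 5.5])
b2, t_stat = level_change_ols(y, n_a=5)
```

If the residuals are autocorrelated, the standard error computed here is biased, so the t statistic no longer has its nominal distribution; that is the mechanism behind the inflated Type I error rates reported in the abstract.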
Abstract:
Boundary equilibrium bifurcations in piecewise-smooth discontinuous systems are characterized by the collision of an equilibrium point with the discontinuity surface. Generically, these bifurcations are of codimension one, but there are scenarios where the phenomenon can be of higher codimension. Here, the possible collision of a non-hyperbolic equilibrium with the boundary in a two-parameter framework, and the nonlinear phenomena associated with such a collision, are considered. By dealing with planar discontinuous (Filippov) systems, some of these phenomena are illustrated through specific representative cases. A methodology for obtaining the corresponding bi-parametric bifurcation sets is developed.
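For reference, the setting described above is the standard planar Filippov framework; this is a generic formulation, not necessarily the authors' notation:

```latex
% A planar Filippov system with discontinuity surface
% \Sigma = \{ x : h(x) = 0 \} and two parameters \mu:
\dot{x} =
\begin{cases}
  f^{+}(x,\mu), & h(x) > 0,\\
  f^{-}(x,\mu), & h(x) < 0,
\end{cases}
\qquad x \in \mathbb{R}^{2},\quad \mu \in \mathbb{R}^{2}.
% A boundary equilibrium bifurcation occurs at parameter values \mu^{*}
% for which an equilibrium of one vector field lies on the surface:
f^{+}(x^{*},\mu^{*}) = 0, \qquad h(x^{*}) = 0.
```

Generically this is a codimension-one event; the abstract concerns the degenerate case in which the colliding equilibrium is additionally non-hyperbolic, which requires the two-parameter framework above.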