Abstract:
Purpose We sought to analyse clinical and oncological outcomes of patients after guided resection of periacetabular tumours and endoprosthetic reconstruction of the remaining defect. Methods From 1988 to 2008, we treated 56 consecutive patients (mean age 52.5 years, 41.1 % women). Patients were followed up either until death or February 2011 (mean follow-up 5.5 years, range 0.1–22.5, standard deviation ± 5.3). Kaplan–Meier analysis was used to estimate survival rates. Results Disease-specific survival was 59.9 % at five years and 49.7 % at both ten and 20 years. Wide resection margins were achieved in 38 patients, whereas 11 patients underwent marginal and seven intralesional resections. Survival was significantly better in patients with wide or marginal resection than in patients with intralesional resection (p = 0.022). Survival for patients with secondary tumours was significantly worse than for patients with primary tumours (p = 0.003). In 29 patients (51.8 %), at least one reoperation was necessary, resulting in a revision-free survival of 50.5 % at five years, 41.1 % at ten years and 30.6 % at 20 years. Implant survival was 77.0 % at five years, 68.6 % at ten years and 51.8 % at 20 years. A total of 35 patients (62.5 %) experienced one or more complications after surgery. Ten of 56 patients (17.9 %) experienced local recurrence after a mean of 8.9 months. The mean postoperative Musculoskeletal Tumor Society (MSTS) score was 18.1 (60.1 %). Conclusion The surgical approach assessed in this study simplifies the process of tumour resection and prosthesis implantation and leads to acceptable clinical and oncological outcomes.
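The survival figures above come from Kaplan–Meier (product-limit) estimation. As a hedged illustration of that calculation, not the authors' code, here is a minimal sketch over hypothetical (follow-up time, event) pairs:

```python
# Minimal Kaplan-Meier sketch over hypothetical follow-up data;
# the (time, event) pairs below are illustrative, not the study's data.
def kaplan_meier(times, events):
    """Return (time, survival) points; events=1 for death, 0 for censoring."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        total_at_t = sum(1 for tt, _ in data[i:] if tt == t)
        if deaths:
            survival *= (at_risk - deaths) / at_risk
            curve.append((t, survival))
        at_risk -= total_at_t
        i += total_at_t
    return curve

# Example: five hypothetical patients (follow-up in years, 1 = disease-specific death).
print(kaplan_meier([1.2, 3.5, 5.0, 7.1, 10.4], [1, 0, 1, 0, 1]))
```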
Abstract:
Background Over the last two decades, the Transcutaneous Bone-Anchored Prosthesis (TCBAP) has proven to be an effective alternative for prosthetic attachment for amputees, particularly for individuals unable to wear a socket. [1-17] However, the load transmitted through a typical TCBAP to the residual tibia and knee joint can be unbearable for transtibial amputees with knee arthritis. Aim A. To describe the surgical procedure combining Total Knee Replacement (TKR) with TCBAP for the first time; and B. To present preliminary data on potential risks and benefits, with assessment of clinical and functional outcomes at follow-up. Method We used a TCBAP connected to the tibial base plate of a TKR prosthesis, enabling the tibial residuum and the knee joint to act as weight-sharing structures by transferring the load directly to the femur. We performed a standard hinged TKR connected to a custom-made TCBAP at the first stage, followed by creation of a skin-implant interface at the second stage. We retrospectively reviewed four cases of transtibial amputation presenting with knee joint arthritis. Patients were assessed clinically and functionally, including standard measures of health-related quality of life, the Amputee Mobility Predictor tool, ambulation tests and actual activity level. Progress was monitored for 6-24 months. Results Clinical outcomes, including adverse events, showed no major complications apart from one case of superficial infection. Functional outcomes improved for all participants as early as the 6-month follow-up. Discussion & Conclusion TKR and TCBAP were combined for the first time in this proof-of-concept case series. The preliminary outcomes indicated that this procedure is potentially a safe and effective alternative for this patient group, despite the theoretical increase in the risk of ascending infection through the skin-implant interface, which is open to the external environment. We suggest larger comparative series to further validate these results.
Abstract:
Background Over the last two decades, the Transcutaneous Bone-Anchored Prosthesis (TCBAP) has proven to be an effective alternative for prosthetic attachment for above-knee amputees, particularly for individuals suffering from socket-interface-related complications. [1-17] Amputees with a very short femoral residuum (<15 cm) are at considerably higher risk of these complications, as well as of implant failure with a typical TCBAP owing to the relatively small bone-implant contact area, creating the need for a novel technique. Aim A. To describe the surgical procedure combining Total Hip Replacement (THR) with TCBAP for the first time; and B. To present preliminary data on potential risks and benefits, with assessment of clinical and functional outcomes at follow-up. Method We used a TCBAP connected to the stem of a THR prosthesis, enabling the femoral residuum and the hip joint to act as weight-sharing structures by transferring the load directly to the pelvis. We performed a tripolar THR connected to a custom-made TCBAP at the first stage, followed by creation of a skin-implant interface at the second stage. We retrospectively reviewed three cases of transfemoral amputation presenting with an extremely short femoral residuum. Patients were assessed clinically and functionally, including standard measures of health-related quality of life, the Amputee Mobility Predictor tool, ambulation tests and actual activity level. Progress was monitored for 6-24 months. Results Clinical outcomes, including adverse events, showed no major complications. Functional outcomes improved for all participants as early as the 6-month follow-up. All cases were wheelchair-bound preoperatively (K0 – AMPRO) and improved to walking with one stick (K3 – AMPRO) at the 3-month follow-up. Discussion & Conclusion THR and TCBAP were combined for the first time in this proof-of-concept case series. The preliminary outcomes indicated that this procedure is potentially a safe and effective alternative for this patient group, despite the theoretical increase in the risk of ascending infection through the skin-implant interface, which is open to the external environment. We suggest larger comparative series to further validate these results.
Abstract:
We report a circuit technique to measure the on-chip delay of an individual logic gate (both inverting and non-inverting) in its unmodified form using a digitally reconfigurable ring oscillator (RO). Solving a system of linear equations obtained from different configuration settings of the RO gives the delay of an individual gate. Experimental results from a test chip in a 65 nm process node show the feasibility of measuring the delay of an individual inverter to within 1 ps accuracy. Delay measurements of different nominally identical inverters in close physical proximity show variations of up to 26%, indicating the large impact of local or within-die variations.
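The reconstruction step described above amounts to a small linear-algebra problem: each RO configuration contributes one equation relating the measured period to the delays of the gates it includes. A hedged sketch with illustrative numbers (not the test-chip measurements) could look like this:

```python
import numpy as np

# Each row of A records how many times each gate's delay contributes to one
# RO configuration's measured period; b holds the measured periods (ps).
# Both matrices are illustrative placeholders, not values from the test chip.
A = np.array([
    [2, 2, 2, 1],   # configuration 1: gate usage counts per period
    [2, 2, 0, 3],   # configuration 2
    [0, 4, 2, 1],   # configuration 3
    [4, 0, 2, 1],   # configuration 4
], dtype=float)
b = np.array([151.0, 148.0, 153.0, 149.0])   # measured RO periods in ps

# Least-squares solve recovers the per-gate delays (ps); using more
# configurations than unknowns also averages out measurement noise.
gate_delays, *_ = np.linalg.lstsq(A, b, rcond=None)
print(gate_delays)
```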
Abstract:
Many software applications extend their functionality by dynamically loading libraries into their allocated address space. However, shared libraries are also often of unknown provenance and quality and may contain accidental bugs or, in some cases, deliberately malicious code. Most sandboxing techniques which address these issues require recompilation of the libraries using custom toolchains, require significant modifications to the libraries, do not retain the benefits of single address-space programming, do not completely isolate guest code, or incur substantial performance overheads. In this paper we present LibVM, a sandboxing architecture for isolating libraries within a host application without requiring any modifications to the shared libraries themselves, while still retaining the benefits of a single address space and also introducing a system-call interposition layer that allows complete arbitration over a shared library’s functionality. We show how to utilize contemporary hardware virtualization support towards this end with reasonable performance overheads; in the absence of such hardware support, our model can also be implemented using a software-based mechanism. We ensure that our implementation conforms as closely as possible to existing shared library manipulation functions, minimizing the amount of effort needed to apply such isolation to existing programs. Our experimental results show that it is easy to gain immediate benefits in scenarios where the goal is to guard the host application against unintentional programming errors when using shared libraries, as well as in more complex scenarios, where a shared library is suspected of being actively hostile. In both cases, no changes are required to the shared libraries themselves.
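For context on the interface such a sandbox aims to preserve (the abstract gives no code, so this is only an assumed, generic illustration), the conventional, unsandboxed way a host program loads and calls into a shared library looks like the following; the library and symbol are just well-known C-library examples:

```python
import ctypes
import ctypes.util

# Conventional (unsandboxed) dynamic loading that a library sandbox would
# interpose on: resolve the library, bind a symbol, call it in-process.
libm_path = ctypes.util.find_library("m")     # the C math library, platform-dependent
libm = ctypes.CDLL(libm_path)                 # analogous to dlopen()
libm.cos.restype = ctypes.c_double            # declare the symbol's signature
libm.cos.argtypes = [ctypes.c_double]         # analogous to dlsym() plus typing
print(libm.cos(0.0))                          # guest code runs with full access
                                              # to the host's address space
```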
Abstract:
High-end network security applications demand high-speed operation and large rule set support. Packet classification is the core functionality that demands high throughput in such applications. This paper proposes a packet classification architecture to meet such high throughput. We have implemented a Firewall with this architecture in reconfigurable hardware. We propose an extension to the Distributed Crossproducting of Field Labels (DCFL) technique to achieve a scalable and high-performance architecture. The implemented Firewall takes advantage of the inherent structure and redundancy of the rule set by using our DCFL Extended (DCFLE) algorithm. The use of the DCFLE algorithm results in both speed and area improvements when it is implemented in hardware. Although we restrict ourselves to standard 5-tuple matching, the architecture supports additional fields. High-throughput classification invariably uses Ternary Content Addressable Memory (TCAM) for prefix matching, though TCAM fares poorly in terms of area and power efficiency. Use of TCAM for port range matching is expensive, as the range-to-prefix conversion results in a large number of prefixes, leading to storage inefficiency. Extended TCAM (ETCAM) is fast and the most storage-efficient solution for range matching. We present for the first time a reconfigurable hardware implementation of ETCAM. We have implemented our Firewall as an embedded system on a Virtex-II Pro FPGA-based platform, running Linux with the packet classification in hardware. The Firewall was tested in real time with a 1 Gbps Ethernet link and 128 sample rules. The packet classification hardware uses a quarter of the logic resources and slightly over one third of the memory resources of the XC2VP30 FPGA. It achieves a maximum classification throughput of 50 million packets/s, corresponding to a 16 Gbps link rate for the worst-case packet size. Firewall rule updates involve only memory re-initialization in software, without any hardware change.
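The storage blow-up from range-to-prefix conversion mentioned above is easy to see with a small sketch (illustrative code, not the paper's implementation):

```python
def range_to_prefixes(lo, hi, bits=16):
    """Greedily split the port range [lo, hi] into ternary prefixes (value, prefix_len)."""
    prefixes = []
    while lo <= hi:
        # Largest power-of-two block that is aligned at `lo` and fits inside [lo, hi].
        block = (lo & -lo) if lo else (1 << bits)
        while block > hi - lo + 1:
            block //= 2
        prefixes.append((lo, bits - block.bit_length() + 1))
        lo += block
    return prefixes

# Classic worst case: the range 1..65535 needs 16 prefixes in a plain TCAM,
# which is why range matching without ETCAM-style support is storage-hungry.
print(len(range_to_prefixes(1, 65535)))   # -> 16
```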
Abstract:
Purpose To determine whether melanopsin-expressing intrinsically photosensitive retinal ganglion cell (ipRGC) inputs to the pupil light reflex (PLR) are affected in early age-related macular degeneration (AMD). Methods The PLR was measured in 40 participants (20 early AMD and 20 age-matched controls) using a custom-built Maxwellian-view pupillometer. Sinusoidal stimuli (0.5 Hz, 11.9 s duration, 35.6° diameter) were presented to the study eye and the consensual pupil response was measured for stimuli with high melanopsin excitation (464 nm; blue) and with low melanopsin excitation (638 nm; red) that biased activation to the outer retina. Two melanopsin PLR metrics were quantified: the Phase Amplitude Percentage (PAP) during the sinusoidal stimulus presentation and the Post-Illumination Pupil Response (PIPR). The PLR during stimulus presentation was analyzed using latency to constriction, transient pupil response and maximum pupil constriction metrics. Diagnostic accuracy was evaluated using receiver operating characteristic (ROC) curves. Results The blue PIPR was significantly less sustained in the early AMD group (p<0.001). The red PIPR was not significantly different between groups (p>0.05). The PAP and blue stimulus constriction amplitude were significantly lower in the early AMD group (p < 0.05). There was no significant difference between groups in the latency or transient amplitude for both stimuli (p>0.05). ROC analysis showed excellent diagnostic accuracy for the blue PIPR metrics (AUC>0.9). Conclusions This is the first report that the melanopsin-controlled PIPR is dysfunctional in early AMD. The non-invasive, objective measurement of the ipRGC-controlled PIPR has excellent diagnostic accuracy for early AMD.
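As a hedged illustration of the ROC analysis behind the diagnostic-accuracy claim (synthetic PIPR values standing in for the study data, and scikit-learn assumed as the toolkit):

```python
from sklearn.metrics import roc_auc_score

# Synthetic, illustrative 6 s PIPR values (% baseline) for controls (label 0)
# and early AMD (label 1); higher values = less sustained response here.
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
pipr   = [57, 60, 55, 63, 59, 74, 78, 70, 81, 76]

auc = roc_auc_score(labels, pipr)
print(f"AUC = {auc:.2f}")   # values above ~0.9 are conventionally called "excellent"
```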
Abstract:
Purpose Melanopsin-expressing retinal ganglion cells (mRGCs) have non-image forming functions including mediation of the pupil light reflex (PLR). There is limited knowledge about mRGC function in retinal disease. Initial retinal changes in age-related macular degeneration (AMD) occur in the paracentral region where mRGCs have their highest density, making them vulnerable during disease onset. In this cross-sectional clinical study, we measured the PLR to determine if mRGC function is altered in early stages of macular degeneration. Methods Pupil responses were measured in 8 early AMD patients (AREDS 2001 classification; mean age 72.6 ± 7.2 years, 5M and 3F) and 12 healthy control participants (mean age 66.6 ± 6.1 years, 8M and 4F) using a custom-built Maxwellian-view pupillometer. Stimuli were 0.5 Hz sinewaves (10 s duration, 35.6° diameter) of short wavelength light (464 nm, blue; retinal irradiance = 14.5 log quanta·cm⁻²·s⁻¹) to produce high melanopsin excitation and of long wavelength light (638 nm, red; retinal irradiance = 14.9 log quanta·cm⁻²·s⁻¹) to bias activation to the outer retina and provide a control. Baseline pupil diameter was determined during a 10 s pre-stimulus period. The post-illumination pupil response (PIPR) was recorded for 40 s. The 6 s PIPR and maximum pupil constriction were expressed as percentage baseline (M ± SD). Results The blue PIPR was significantly less sustained (p<0.01) in the early AMD group (75.49 ± 7.88%) than the control group (58.28 ± 9.05%). The red PIPR was not significantly different (p>0.05) between the early AMD (84.79 ± 4.03%) and control groups (82.01 ± 5.86%). Maximum constriction amplitudes in the early AMD group for blue (43.67 ± 6.35%) and red (48.64 ± 6.49%) stimuli were not significantly different from those of the control group for blue (39.94 ± 3.66%) and red (44.98 ± 3.15%) stimuli (p>0.05). Conclusions These results are suggestive of inner retinal mRGC deficits in early AMD. This non-invasive, objective measure of pupil responses may provide a new method for quantifying mRGC function and monitoring AMD progression.
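A hedged sketch of the baseline-normalised metrics reported above (the 6 s PIPR and maximum constriction as a percentage of baseline), using a made-up pupil trace rather than recorded data:

```python
import numpy as np

# Toy pupil-diameter trace sampled at 30 Hz: 10 s pre-stimulus baseline,
# 10 s stimulus, 40 s post-illumination recording (values are illustrative).
fs = 30                                     # samples per second
trace = np.concatenate([
    np.full(10 * fs, 7.0),                  # baseline diameter (mm)
    np.linspace(7.0, 4.0, 10 * fs),         # constriction during the stimulus
    np.linspace(4.0, 6.0, 40 * fs),         # partial redilation after light offset
])

baseline = trace[:10 * fs].mean()
stim_end = 20 * fs                          # index of stimulus offset

# Maximum constriction and the 6 s PIPR, both expressed as % of baseline,
# mirroring how the abstract reports its M ± SD values.
max_constriction_pct = trace[10 * fs:stim_end].min() / baseline * 100
pipr_6s_pct = trace[stim_end + 6 * fs] / baseline * 100
print(f"max constriction = {max_constriction_pct:.1f}% baseline, "
      f"6 s PIPR = {pipr_6s_pct:.1f}% baseline")
```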
Abstract:
The 2008 US election has been heralded as the first presidential election of the social media era, but took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well-established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of such experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation. This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of the election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase. A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or – alternatively – demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and exploit or avert damage from campaign gaffes. 
A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage. By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets) which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election, which will provide detailed new insights into social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.
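The tweeting-style metrics described in this abstract (the share of original messages, @replies and retweets per account) reduce to a simple per-tweet classification over the collected dataset. A hedged sketch with made-up records (the sample tweets and the @JohnDoe handle are illustrative, not from the actual dataset):

```python
from collections import Counter

def tweet_style(tweet):
    """Classify a tweet record as 'retweet', '@reply', or 'original'."""
    text = tweet["text"]
    if text.startswith("RT @") or tweet.get("retweeted_status"):
        return "retweet"
    if text.startswith("@"):
        return "@reply"
    return "original"

# Illustrative records standing in for the collected campaign-account tweets.
sample = [
    {"account": "@BarackObama", "text": "Four more years."},
    {"account": "@BarackObama", "text": "RT @Obama2012: Watch the debate live tonight."},
    {"account": "@MittRomney",  "text": "@JohnDoe Thanks for your support!"},
]

# Per-account proportions of each style, the basic quantity behind the analysis.
for account in {t["account"] for t in sample}:
    counts = Counter(tweet_style(t) for t in sample if t["account"] == account)
    total = sum(counts.values())
    print(account, {style: f"{n / total:.0%}" for style, n in counts.items()})
```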
Abstract:
Books Paths to Readers describes the history of the origins and consolidation of modern, open book stores in Finland in 1740–1860. The thesis approaches the book trade as a part of print culture. Instead of the literary-studies focus on texts and writers, book history seeks to describe the print culture of a society and how literary activities and societies interconnect. For book historians, printed works are creations of various individuals and groups: writers, printers, editors, book sellers, censors, critics and, finally, readers. They all take part in the creation, delivery and interpretation of printed works. The study reveals the ways in which selling and distributing books have influenced printed works and the literary and print culture. The research period 1740–1860 covers the so-called second revolution of the book, or the modernisation of print culture. The thesis describes the history of 60 book stores and their 96 owners. The study concentrates on three themes: firstly, how this particular book trade network became a central institution for the distribution of printed works; secondly, what the relations were between cosmopolitan European book markets and the national cultural sphere; and thirdly, how book stores functioned as cultural institutions and business enterprises. Book stores with a varied assortment, targeted at all readers, became the main institution of the book trade in Finland during 1740–1860. This happened because of three developments. First, the book binders' monopoly on selling bound copies in Sweden was abolished in the 1740s. As a consequence, entrepreneurs could concentrate solely on trade activities and offer copies from various publishers at their stores. Secondly, the common business model of bartering was replaced by selling copies for cash, first in the German book trade centre of Leipzig in the 1770s. The change intensified book market activities and the foreign connections of Finnish book stores. Thirdly, after Finland was annexed to the Russian empire in 1809, the Grand Duchy's administration steered the foreign book trade to book stores (because of censorship demands). Up to the 1830s, book stores were available only in Helsinki and Turku. During the next ten years, book stores opened in six regional centres. The early entrepreneurs usually ran vertical businesses consisting of printing, publishing and distribution activities. This strategy lowered costs, eased the delivery of printed works and helped to create elaborate centres for all book activities. These book stores' main clientele consisted of the Swedish-speaking gentry. During the late 1840s, various opinion leaders called for the development of a national Finnish print culture, and also of book stores. As a result, during the five years before the beginning of the Crimean war (1853–1856), book stores were opened in almost all Finnish towns: at the beginning of the war, 36 book stores operated in 21 towns. The later book sellers, mainly functioning in small towns among Finnish-speaking people, usually settled strictly for selling activities. Book stores received most of their revenues from selling foreign titles. Swedish, German, French and Belgian (pirate editions of popular French novels) books were widely available to the multilingual gentry. Foreign titles and copies brought in most of the revenues. Neither censorship inspections nor unfavourable customs fees limited the imports. Even though local Finnish print production steadily rose, many copies, even titles, were never delivered via book stores.
Only during the 1840s and 1850s did the most advanced publishers concentrate on creating publishing programmes and delivering their titles via book stores. Book sellers' regulated commissions were small. They became even smaller because of large numbers of unsold copies, frequent misunderstandings over consignments and accounts, or plain accidents that destroyed shipments and warehouses. Also, the cultural aim of creating large assortments and the tendency towards short selling periods demanded professional entrepreneurship, which many small-town book sellers lacked. In the midst of these troublesome business conditions, co-operation and mutual concern among the book market's entrepreneurs were the key elements of the trade, although at the local level book sellers would compete, sometimes even ferociously. The difficult circumstances (the new censorship decree of 1850, the Crimean war) and the lack of entrepreneurship, experience and customers meant that half of the book stores opened in 1845–1860 were shut in less than five years. In 1858 the few leading publishers established the Finnish Book Publishers Association. Its first task was to create new business rules and practices for the book trade. The association's activities began to professionalise the whole network, but at the same time the earlier independence of regional publishing and selling enterprises diminished greatly. The consolidation of a modern and open book store network in Finland is a history of slow and complex development without clear signs of a beginning or an end. The ideal book store model was rarely accomplished in all its features. Nevertheless, book stores became the norm of the book trade. They managed to offer larger selections, reach larger clienteles and maintain constant activity better than any other book distribution model. In essence, the book stores' methods have not changed up to the present day.
Abstract:
The dissertation describes the conscription of Finnish soldiers into the Swedish army during the Thirty Years' War. The work concentrates on so-called substitute soldiers, who were hired for conscription by wealthier peasants, who thus avoided the draft. The substitutes were the largest group recruited by the Swedish army in Sweden, making up approximately 25-80% of the total number of soldiers. They received a significant sum of money from the peasants: about 50-250 Swedish copper dalers, corresponding to the price of a small peasant house. The practice of using substitutes was managed by the local village council. The recruits normally came from the landless population. However, when there was an urgent need for men, even the yeomen had to leave their homes for the distant garrisons across the Baltic. Conscription and its devastating effect on agricultural production also reduced the flow of state revenues. One of the tasks of the dissertation is to examine the correlation between the custom of using substitutes and the abandonment of farmsteads (which refers, in the first place, to the inability to pay taxes). In areas where no substitutes were available, the peasants had to join the army themselves, which normally led to abandonment and financial ruin, because agricultural production was based on physical labour. This led to the rise of large farms at the cost of smaller ones. Hence, the system of substitutes was a factor that transformed the mode of settlement.
Abstract:
Large-scale gene discovery has been performed for the grass fungal endophytes Neotyphodium coenophialum, Neotyphodium lolii, and Epichloë festucae. The resulting sequences have been annotated by comparison with public DNA and protein sequence databases and using intermediate gene ontology annotation tools. Endophyte sequences have also been analysed for the presence of simple sequence repeat and single nucleotide polymorphism molecular genetic markers. Sequences and annotations are maintained within a MySQL database that may be queried using a custom web interface. Two cDNA-based microarrays have been generated from this genome resource. They permit the interrogation of 3806 Neotyphodium genes (Nchip™ microarray) and 4195 Neotyphodium and 920 Epichloë genes (EndoChip™ microarray), respectively. These microarrays provide tools for high-throughput transcriptome analysis, including genome-specific gene expression studies, profiling of novel endophyte genes, and investigation of the host grass–symbiont interaction. Comparative transcriptome analysis in Neotyphodium and Epichloë was performed.
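As a hedged illustration of the simple-sequence-repeat screening mentioned above (a generic regex scan with an assumed length threshold, not the project's actual pipeline):

```python
import re

# Find simple sequence repeats: motifs of 1-6 bp repeated enough times to span
# at least 12 bp, an illustrative SSR length threshold.
SSR_PATTERN = re.compile(r"(([ACGT]{1,6}?)\2{3,})")

def find_ssrs(seq, min_len=12):
    hits = []
    for match in SSR_PATTERN.finditer(seq.upper()):
        repeat, motif = match.group(1), match.group(2)
        if len(repeat) >= min_len:
            hits.append((match.start(), motif, len(repeat) // len(motif)))
    return hits  # (position, motif, copy number)

# Toy EST fragment with a (GA)n repeat embedded in it.
print(find_ssrs("TTGCAGAGAGAGAGAGAGAGCCTA"))   # -> [(4, 'AG', 8)]
```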
Abstract:
Background: Crustaceans represent an attractive model to study biomineralization and cuticle matrix formation, as these events are precisely timed to occur at certain stages of the moult cycle. Moulting, the process by which crustaceans shed their exoskeleton, involves the partial breakdown of the old exoskeleton and the synthesis of a new cuticle. This cuticle is subdivided into layers, some of which become calcified while others remain uncalcified. The cuticle matrix consists of many different proteins that confer the physical properties, such as pliability, of the exoskeleton. Results: We have used a custom cDNA microarray chip, developed for the blue swimmer crab Portunus pelagicus, to generate expression profiles of genes involved in exoskeletal formation across the moult cycle. A total of 21 distinct moult-cycle related differentially expressed transcripts representing crustacean cuticular proteins were isolated. Of these, 13 contained copies of the cuticle_1 domain previously isolated from calcified regions of the crustacean exoskeleton, four transcripts contained a chitin_bind_4 domain (RR consensus sequence) associated with both the calcified and un-calcified cuticle of crustaceans, and four transcripts contained an unannotated domain (PfamB_109992) previously isolated from C. pagurus. Additionally, cryptocyanin, a hemolymph protein involved in cuticle synthesis and structural integrity, also displays differential expression related to the moult cycle. Moult stage-specific expression analysis of these transcripts revealed that differential gene expression occurs both among transcripts containing the same domain and among transcripts containing different domains. Conclusion: The large variety of genes associated with cuticle formation, and their differential expression across the crustacean moult cycle, point to the complexity of the processes associated with cuticle formation and hardening. This study provides a molecular entry path into the investigation of the gene networks associated with cuticle formation.
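As a hedged sketch of the moult-stage expression profiling described above (toy log2 intensities and hypothetical transcript names echoing the cuticle_1, chitin_bind_4 and cryptocyanin examples, not the Portunus pelagicus array data):

```python
import numpy as np

# Toy log2 intensities for three hypothetical transcripts across four moult
# stages (columns: intermoult, premoult, ecdysis, postmoult); illustrative only.
stages = ["intermoult", "premoult", "ecdysis", "postmoult"]
expression = {
    "cuticle_1_like":     np.array([6.1, 9.8, 7.4, 10.2]),
    "chitin_bind_4_like": np.array([7.0, 7.2, 6.9, 7.1]),
    "cryptocyanin_like":  np.array([5.5, 8.9, 9.6, 6.0]),
}

# Flag a transcript as moult-related if its max-min log2 difference exceeds an
# arbitrary, illustrative threshold of 1.0 log2 unit (2-fold change).
for name, values in expression.items():
    spread = values.max() - values.min()
    peak = stages[int(values.argmax())]
    flag = "differentially expressed" if spread > 1.0 else "flat"
    print(f"{name}: log2 range {spread:.1f}, peak at {peak} -> {flag}")
```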
Abstract:
Background Fusion transcripts are found in many tissues and have the potential to create novel functional products. Here, we investigate the genomic sequences around fusion junctions to better understand the transcriptional mechanisms mediating fusion transcription/splicing. We analyzed data from prostate (cancer) cells, as previous studies have shown extensively that these cells readily undergo fusion transcription. Results We used the FusionMap program to identify high-confidence fusion transcripts from RNAseq data. The RNAseq datasets were from our (N = 8) and other (N = 14) clinical prostate tumors with adjacent non-cancer cells, and from LNCaP prostate cancer cells that were mock-, androgen- (DHT), and anti-androgen- (bicalutamide, enzalutamide) treated. In total, 185 fusion transcripts were identified from all RNAseq datasets. The majority (76 %) of these fusion transcripts were ‘read-through chimeras’ derived from adjacent genes in the genome. Characterization of sequences at fusion loci was carried out using a combination of the FusionMap program, custom Perl scripts, and the RNAfold program. Our computational analysis indicated that most fusion junctions (76 %) use the consensus GT-AG intron donor-acceptor splice site, and most fusion transcripts (85 %) maintained the open reading frame. We assessed whether parental genes of fusion transcripts have the potential to form complementary base pairing that might bring them into physical proximity. Our computational analysis of sequences flanking fusion junctions at parental loci indicates that these loci have a similar propensity to hybridize as non-fusion loci. The abundance of repetitive sequences at fusion and non-fusion loci was also investigated, given that SINE repeats are involved in aberrant gene transcription. We found few instances of repetitive sequences at both fusion and non-fusion junctions. Finally, RT-qPCR was performed on RNA from both clinical prostate tumors and adjacent non-cancer cells (N = 7), and LNCaP cells treated as above, to validate the expression of seven fusion transcripts and their respective parental genes. We reveal that fusion transcript expression is similar to the expression of parental genes. Conclusions Fusion transcripts maintain the open reading frame, and likely use the same transcriptional machinery as non-fusion transcripts, as they share many genomic features at splice/fusion junctions.
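A hedged sketch of the two junction checks reported above (canonical GT-AG splice dinucleotides and reading-frame maintenance); the sequences, coordinates and the simplified frame rule are illustrative assumptions, not the study's Perl scripts:

```python
def uses_gt_ag(donor_flank, acceptor_flank):
    """True if the intronic side of the junction starts with GT and ends with AG.

    donor_flank: first intronic bases downstream of the 5' fusion partner's exon.
    acceptor_flank: last intronic bases upstream of the 3' partner's exon.
    """
    return donor_flank.upper().startswith("GT") and acceptor_flank.upper().endswith("AG")

def frame_maintained(cds_bases_before_junction, acceptor_frame_offset):
    """Simplified check: the 5' partner's coding contribution must leave the
    3' partner's codons in the frame it expects (both counted in bases)."""
    return cds_bases_before_junction % 3 == acceptor_frame_offset % 3

# Illustrative junction: canonical splice site, frame preserved.
print(uses_gt_ag("GTAAGTCTTG", "CTTTTTTCAG"))   # -> True
print(frame_maintained(312, 0))                  # -> True
```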