Bioinformatics Core Facility BMC

Services

Service levels

  • The standard way to work with the facility is a collaboration, which we offer to research groups within the BMC as well as to researchers outside the institute or outside LMU Munich. 
  • We also offer to include the facility in research networks, in which all members can request bioinformatic support. As part of such an arrangement, the facility will ask for a third-party-funded staff member. 
  • Research groups whose projects require intensive bioinformatic analyses are advised to consider embedding a scientist in the facility. This researcher will conduct computational analyses under our supervision. If you are considering embedding, please contact us as soon as possible to reserve desk space.
  • Consultation: we are happy to discuss experimental design and statistics with any researcher at any time.
  • Training

At present, all services except extramural training courses are free of charge. We operate on a first-come, first-served basis but reserve the right to prioritize studies that are close to submission for publication. We expect our contribution to a publication to be documented by either authorship or an acknowledgement, depending on the amount of contributed work and intellectual input.

Service details

We can provide a wide range of bioinformatic analyses and services. The following list details some of the most requested ones.

Experimental Design

Bioinformatic data analysis can only be as good as the data provided. To ensure reliable and straightforward interpretation of the results, it is crucial to design the experiment with the data analysis procedures already in mind. We therefore ask potential collaborators to consult us before performing experiments, so that we can decide together on important parameters such as:

  • What is the optimal experimental strategy to identify effects of interest?
  • What is the required number of biological replicates for statistical testing? (see the power-analysis sketch after this list)
  • Which high-throughput technology is best suited?
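
To illustrate the replicate question, the following is a minimal sketch of a statistical power calculation for a simple two-group comparison, assuming the Python statsmodels library is available; the effect size, significance level, and target power are placeholder values, not facility recommendations.

```python
# Minimal sketch: how many biological replicates per group are needed
# to detect a given effect in a two-group comparison?
# Assumes statsmodels is installed; all numbers are placeholder values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=1.0,   # expected standardized difference (Cohen's d)
    alpha=0.05,        # acceptable false-positive rate
    power=0.8,         # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"Required biological replicates per group: {n_per_group:.1f}")
```

In practice, the expected effect size and variability are exactly the quantities worth discussing with us before the experiment, since they drive the replicate number.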

Data Preprocessing

Data from several high-throughput techniques require an initial processing step before they can be interpreted numerically. These are usually standardized procedures that can be run in an automated or semi-automated manner. Turnaround times are frequently less than 24 hours.

  • Quality control
  • NGS data: alignment, peak calling, coverage vectors, enrichment calculations, etc. (see the coverage sketch after this list)
  • Microarray data: normalization, extraction of raw intensities, etc.
  • Feature extraction and quantification of microscopic images
  • Generation of browsable data and/or spreadsheet files
  • Preparation of data for deposition in public repositories, as required by many journals upon manuscript submission
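
To give a flavor of such preprocessing steps, the following is a minimal sketch of computing a per-base coverage vector from aligned NGS data, assuming the Python pysam and numpy libraries are available; the file name sample.bam, the contig name, and the region coordinates are hypothetical.

```python
# Minimal sketch: per-base coverage vector for one region of an aligned
# NGS dataset. Assumes pysam and numpy are installed; 'sample.bam' is a
# hypothetical sorted, indexed BAM file, and the region is arbitrary.
import numpy as np
import pysam

with pysam.AlignmentFile("sample.bam", "rb") as bam:
    # count_coverage returns four arrays (A, C, G, T counts per base);
    # summing them gives the total read coverage at each position.
    per_base = np.sum(bam.count_coverage("chr1", 1_000_000, 1_010_000), axis=0)

print(f"Mean coverage in region: {per_base.mean():.1f}x")
```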

Downstream analyses

These are hypothesis-driven or exploratory computational analyses that require intensive exchange between collaboration partners. Such steps typically have long turnaround times, which can be substantially decreased by formulating specific questions or hypotheses up front.

  • Visualization of complex data
  • Summary statistics and statistical testing (see the testing sketch after this list)
  • Correlation analyses, machine learning
  • Integrative analysis combining different data types, experiments and data downloaded from repositories
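
As a small example of the statistical-testing step, the following is a minimal sketch of a two-group comparison with Welch's t-test, assuming SciPy is available; the measurements are simulated placeholder values rather than real experimental data.

```python
# Minimal sketch: statistical testing of a two-group comparison.
# Assumes scipy and numpy are installed; the data are simulated
# placeholder values, not a real experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
control = rng.normal(loc=10.0, scale=2.0, size=6)    # 6 replicates per group
treatment = rng.normal(loc=13.0, scale=2.0, size=6)

# Welch's t-test: does not assume equal variances between groups.
result = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```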

Data deposition in public archives (such as GEO, SRA, ENA)

Programming databases, reusable scripts, web and stand-alone applications

Given our rather limited resources, we engage in programming services only if a larger user group would benefit and/or if the service is covered by third-party-funded staff as part of a network project.