

High-Dimensional Analysis

Weighting: 20%

Late-onset Alzheimer’s disease (LOAD) is not well understood in terms of its causes and suitable treatments. A deep understanding of relevant genetic factors could allow the development of effective drugs or other interventions to treat or prevent this condition. Zhang et al. (2013) published a study which looked at gene expression in the prefrontal cortex (PFC) of a large number of deceased donor subjects, with slightly over half having had LOAD and the remainder being cognitively healthy. The tissue samples were provided by the Harvard Brain Tissue Resource Center (HBTRC) at McLean Hospital in Belmont, Massachusetts, USA (near Boston). Each subject had been diagnosed with respect to LOAD while still alive, with further extensive pathology examination after death.

The datasets were made public, with the brain dataset now available from https://www.ncbi.nlm.nih.gov/geo/geo2r/?acc=GSE44772 and also from Blackboard. We ask that you use the Blackboard version as this has had a small amount of imputation done, and we will only consider part of the original dataset.

The patient condition labels are stored in brain_sample_descriptions_PFC.csv: normal (“N”) or with Alzheimer’s (“A”), along with their age in years at time of death and sex (“M” or “F”). The batch-normalised, logged and otherwise processed gene expression data is stored in braindat.csv. This processed dataset contains gene expression data derived from brain samples from 230 people, processed with microarrays to record values for 39280 probes, each of which is intended to represent a different gene or other transcript.
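If you work in Python, a minimal loading sketch with pandas might look like the following. The orientation of braindat.csv and the column contents of the descriptions file are assumptions here, so check them against the Blackboard files before relying on this.

```python
import pandas as pd

# Expression matrix: assumed here to be probes-by-samples as downloaded
# (a common microarray layout); transpose so that rows = samples if needed.
expr = pd.read_csv("braindat.csv", index_col=0)
if expr.shape[0] > expr.shape[1]:        # 39280 probes vs 230 samples
    expr = expr.T                        # rows = samples, columns = probes

# Sample descriptions: condition ("A"/"N"), age at death and sex ("M"/"F").
# Column names used in later sketches are placeholders to be checked.
info = pd.read_csv("brain_sample_descriptions_PFC.csv", index_col=0)

print(expr.shape)                        # expect (230, 39280) after any transpose
print(info.head())
```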

In addition to the scientific interest in the differences in gene expression between subjects with LOAD and those without, it is of interest to be able to determine whether a deceased subject would have been diagnosed with LOAD. In time, it might be possible to use gene expression profiles to try to predict this in living subjects as well.

Your tasks with the dataset are focused on classification of a sample as coming from a patient with late-onset Alzheimer’s disease or without, and identification of genes of potential interest.

You should select one classifier for the task of classification, which you have not used in previous assignments. Probability-based classifiers discussed in this course include linear, quadratic, mixture and kernel density discriminant analysis. Non-probability-based classifiers discussed include k nearest neighbours, neural networks, support vector machines, classification trees, random forests and boosted ensembles. All of these are implemented via various packages available in R and Python. If you wish to use a different method, please check with the lecturer. In addition, you will make use of lasso-penalised logistic regression. Note that you cannot choose another form of logistic regression as your other classifier.

The number of observations is less than the number of variables, and so some form of dimensionality reduction is needed for most forms of probability-based classifier and can be used if desired with the non-probability-based classifiers.

Here we consider analysis of this data to

(i) develop a model which is capable of accurately predicting the class (Alzheimer’s or normal) of new observations.

(ii) determine which genes are expressed differently between the two groups, individually, or as part of a combination.

Discriminant analysis/supervised classification can be applied to solve (i), and in combination with feature (predictor) selection, can be used to provide a limited solution to (ii) also. Other methods such as single-variable analysis can also be applied to attempt to answer (ii). You should use R or Python for the assignment.

Tasks:

1. (3 marks) Perform principal component analysis of the gene expression dataset and report and comment on the results. Detailed results should be submitted via a separate file, including what each principal component direction is composed of in terms of the (transformed) original explanatory variables, with some explanation in the main report about what is in the file. Give a plot or plots which show the individual proportions of variance explained by each component up to the first 30. Also produce and include another plot about the principal components which you think would be of interest to scientists and clinicians such as Zhang et al., along with some explanation and discussion. The R package FactoMineR is a good option for PCA.
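If you prefer Python over FactoMineR, one possible sketch using scikit-learn is shown below. It assumes a samples-by-probes matrix as in the loading sketch above; whether to scale the already-processed probes (rather than only centre them) is a choice you should justify.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

expr = pd.read_csv("braindat.csv", index_col=0)      # samples x probes (see loading sketch)

X = StandardScaler().fit_transform(expr.values)      # centre (and scale) each probe
pca = PCA(n_components=30).fit(X)

# Scree-style plot: individual proportion of variance explained by PC1..PC30.
plt.bar(range(1, 31), pca.explained_variance_ratio_)
plt.xlabel("Principal component")
plt.ylabel("Proportion of variance explained")
plt.savefig("pca_scree.png", dpi=150)

# Loadings (what each PC direction is composed of), written to the separate file.
loadings = pd.DataFrame(pca.components_.T, index=expr.columns,
                        columns=[f"PC{i}" for i in range(1, 31)])
loadings.to_csv("pca_loadings.csv")
```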

2. (3 marks) Perform single-variable analysis of the dataset with respect to the genetic probes, looking for a relationship with the response variable (the class). In doing so, use a linear model and adjust for both age and sex. Use the Benjamini-Hochberg (1995) or Benjamini-Yekutieli (2001) approach to control the false discovery rate to be at most 0.1 with respect to the probe hypothesis tests. Explain the assumptions of this approach and whether or not these are likely to be met by this dataset, along with possible consequences of any violations. Give pseudocode for the method you use. Give a plot of the log of gene index, ordered by p-value, versus the log of unadjusted p-value, along with a line indicating the FDR control (similar to Figure 18.19 from Hastie et al., 2009). Determine which genes are then declared significant along with the resulting threshold in the original p-values and report the count of these. If you find more than 30 genes significant, list only the top 30 in your report, along with their p-values (adjusted for the two covariates mentioned, but not adjusted for the purposes of FDR).

Within the R stats package (built-in) is the function p.adjust, which offers this method. More advanced implementations include the fdrame package in Bioconductor.
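For illustration, one hedged Python sketch of the per-probe tests follows: each probe’s expression is regressed on a disease indicator plus age and sex, the p-value of the disease coefficient is kept, and Benjamini-Hochberg is applied via statsmodels. The column names ("condition", "age", "sex") and the assumption that the two files share the same sample order are placeholders to check against the real data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

expr = pd.read_csv("braindat.csv", index_col=0)                       # samples x probes
info = pd.read_csv("brain_sample_descriptions_PFC.csv", index_col=0)  # same sample order assumed

# Design matrix: intercept, disease indicator, age, sex indicator
# ("condition", "age" and "sex" are assumed column names).
design = sm.add_constant(pd.DataFrame({
    "alz": (info["condition"] == "A").astype(float),
    "age": info["age"].astype(float),
    "male": (info["sex"] == "M").astype(float),
}))

pvals = np.empty(expr.shape[1])
for j, probe in enumerate(expr.columns):
    fit = sm.OLS(expr[probe].values, design.values).fit()
    pvals[j] = fit.pvalues[1]                     # column 1 = the disease indicator

# Benjamini-Hochberg control of the FDR at 0.1.
reject, p_bh, _, _ = multipletests(pvals, alpha=0.1, method="fdr_bh")
print("probes declared significant:", reject.sum())
```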

3. (2 marks) Define binary logistic regression with a lasso penalty mathematically, including the function to be optimised, and briefly introduce a method that can be used to optimise it. Note that this might require a little research.
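As a hedged starting point (not the only valid formulation, and the notation may differ from the course notes), the penalised objective is often written as below; cyclical coordinate descent, as used by glmnet, is one standard method for optimising it.

```latex
% A sketch of one common formulation; y_i in {0,1}, x_i in R^p.
\[
  \min_{\beta_0,\,\beta \in \mathbb{R}^p} \;
  -\frac{1}{n}\sum_{i=1}^{n}\Big[\, y_i\,(\beta_0 + x_i^{\top}\beta)
      - \log\!\big(1 + e^{\,\beta_0 + x_i^{\top}\beta}\big) \Big]
  \;+\; \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert
\]
% i.e. the average negative log-likelihood of the logistic model plus an
% L1 penalty on the coefficients (the intercept is usually left unpenalised).
```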

4.   (1 mark) Explain the potential benefits and drawbacks of using PCA to reduce the dimensionality of the data before attempting to fit each type of classifier to this dataset. Decide whether you will use PCA to reduce dimensionality for each classifier and justify this decision.

5. Apply each classification method (your choice and lasso logistic regression) to the dataset using R or Python.

For lasso logistic regression in R, I suggest you use the glmnet package, available on CRAN, and make use of the function cv.glmnet and the family="binomial" option. If you are interested, there is a recording of Trevor Hastie giving a tutorial on the lasso and glmnet at http://www.youtube.com/watch?v=BU2gjoLPfDc. There are other options in Python, including in scikit-learn.
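If you take the Python route, a hedged scikit-learn sketch of the cross-validated lasso fit is shown below; the "condition" column name and the λ grid are assumptions, and the glmnet/cv.glmnet route in R is equally acceptable.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

expr = pd.read_csv("braindat.csv", index_col=0)                       # samples x probes
info = pd.read_csv("brain_sample_descriptions_PFC.csv", index_col=0)  # same sample order assumed
X = StandardScaler().fit_transform(expr.values)
y = (info["condition"] == "A").astype(int).values                     # "condition" is an assumed name

lasso_cv = LogisticRegressionCV(
    Cs=np.logspace(-3, 1, 30),       # grid over C = 1/lambda
    penalty="l1", solver="saga",     # saga handles the L1 penalty
    scoring="neg_log_loss",          # deviance-like CV cost
    cv=10, max_iter=5000)
lasso_cv.fit(X, y)
print("chosen C (= 1/lambda):", lasso_cv.C_[0])

# CV cost curve: mean log-loss across folds versus log(lambda).
mean_cost = -list(lasso_cv.scores_.values())[0].mean(axis=0)
plt.plot(np.log(1.0 / lasso_cv.Cs_), mean_cost)
plt.xlabel("log(lambda)")
plt.ylabel("CV log-loss")
plt.savefig("lasso_cv_curve.png", dpi=150)

# Probes kept by the optimal model, ordered by |coefficient|.
coefs = pd.Series(lasso_cv.coef_.ravel(), index=expr.columns)
kept = coefs[coefs != 0]
kept = kept.reindex(kept.abs().sort_values(ascending=False).index)
print(kept.head(30))
```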

a) (1 mark) For your chosen classifier, characterise each fitted class by reporting parameter estimates or a reasonable alternative.

b) (2 marks) For lasso logistic regression, you will need to use cross-validation to estimate the optimal value of λ. Explain how you plan to search over possible values. Then produce and explain a graph of your cost function versus λ. You should also produce a list, ordered by importance, of the genes included as predictor variables in the optimal classifier, along with their estimated coefficients.

For your chosen classifier, also determine an ordered list of the most important genes, stopping at 30, or earlier if justified. For each classifier, comment on any differences between the apparent and CV-derived overall error rates.

c) (2 marks) For both classifiers, give cross-validation (CV)-based estimates of the overall and class-specific error rates, obtained by training the classifier on a large fraction of the whole dataset, applying it to the remaining data and recording the errors. You may use K-fold CV with K ≥ 5 or leave-one-out cross-validation to estimate performance. Additionally, report the overall apparent error rates (when the classifier is trained on all the data and applied back to it).
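One possible way to organise this bookkeeping in Python is sketched below. The lasso logistic model inside the pipeline is only a placeholder for whichever tuned classifier you are evaluating, and the "condition" column name is an assumption; keeping the scaler inside the pipeline means it is refitted within each CV fold, avoiding information leakage.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

expr = pd.read_csv("braindat.csv", index_col=0)
info = pd.read_csv("brain_sample_descriptions_PFC.csv", index_col=0)
X = expr.values
y = (info["condition"] == "A").astype(int).values                    # assumed column name

# Placeholder classifier; substitute your chosen method / tuned lasso model.
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l1", C=0.1,
                                       solver="saga", max_iter=5000))

# CV-based estimates (K = 10 here; K >= 5 or leave-one-out are both acceptable).
cv_pred = cross_val_predict(clf, X, y,
                            cv=StratifiedKFold(10, shuffle=True, random_state=1))
tn, fp, fn, tp = confusion_matrix(y, cv_pred).ravel()
print("CV overall error      :", (fp + fn) / len(y))
print("CV error | normal     :", fp / (tn + fp))
print("CV error | Alzheimer's:", fn / (fn + tp))

# Apparent error rate: train on all the data and apply it back to itself.
app_pred = clf.fit(X, y).predict(X)
print("apparent overall error:", np.mean(app_pred != y))
```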

6. (3 marks) Compare the results from all approaches to analysis of the dataset (PCA, single-variable analysis and the two classifiers). Explain what each approach seems to offer, including consideration of these results as an example. In particular, if you had to suggest 10 genes for the biologists to study further for possible links to LOAD, which ones would you prioritise, and what makes you think they are worth studying further?

7. (3 marks) Mathematically define the partial correlation between two variables, assuming they come from a p-dimensional joint distribution. Consider only the probes which were selected using the lasso logistic regression. If you found more than 30, choose the 30 probes with the largest absolute value of coefficient. For this set of probes, estimate two Gaussian graphical models using the graphical lasso – one for subjects who were diagnosed with LOAD, and one for those who were not. Produce two graphs to represent the partial correlation matrices for each group, ignoring any partial correlations with absolute value less than 0.1. The graphs should include relevant edges and node labels, and should indicate the strength of any dependence shown. What are the main differences between the two graphs? Note: you do not need to find out more about what each probe represents genetically.
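A minimal Python sketch of the graphical-lasso step, using scikit-learn and networkx, is given below. The `selected_probes` line is a labelled placeholder to be replaced with the probes actually kept by your lasso fit, and the "condition" column name and shared sample order are assumptions; the partial correlation is recovered from the estimated precision matrix Θ as ρ_jk = -Θ_jk / sqrt(Θ_jj Θ_kk).

```python
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
from sklearn.covariance import GraphicalLassoCV
from sklearn.preprocessing import StandardScaler

expr = pd.read_csv("braindat.csv", index_col=0)
info = pd.read_csv("brain_sample_descriptions_PFC.csv", index_col=0)   # same sample order assumed
# Placeholder: replace with the (<= 30) probes kept by the lasso logistic fit.
selected_probes = list(expr.columns[:30])

def ggm_graph(group_label, out_png):
    """Fit a graphical lasso for one group and draw its partial-correlation graph."""
    mask = (info["condition"] == group_label).values        # "condition" is an assumed name
    X = StandardScaler().fit_transform(expr.loc[mask, selected_probes].values)
    theta = GraphicalLassoCV().fit(X).precision_
    d = np.sqrt(np.diag(theta))
    pcorr = -theta / np.outer(d, d)                          # rho_jk = -theta_jk / sqrt(theta_jj * theta_kk)
    np.fill_diagonal(pcorr, 1.0)

    G = nx.Graph()
    G.add_nodes_from(selected_probes)
    for j in range(len(selected_probes)):
        for k in range(j + 1, len(selected_probes)):
            if abs(pcorr[j, k]) >= 0.1:                      # ignore weak partial correlations
                G.add_edge(selected_probes[j], selected_probes[k], weight=pcorr[j, k])

    widths = [4 * abs(G[u][v]["weight"]) for u, v in G.edges]  # edge width reflects strength
    plt.figure(figsize=(8, 8))
    nx.draw_networkx(G, with_labels=True, width=widths, node_size=300, font_size=6)
    plt.savefig(out_png, dpi=150)
    plt.close()

ggm_graph("A", "ggm_LOAD.png")
ggm_graph("N", "ggm_normal.png")
```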



