Brennan Benavidez
Title: Music and Markov Chains

Introduction
Explanation of what sound is scientifically: waveforms, energy spectra, and the instruments used for the project
A quick review of Markov chains and linear difference equations
The connection between the data collected, Markov chains, and linear difference equations
Showcase the data that was found and what the data is telling us
Dive into a more detailed explanation of the data and compare the energy spectra of the instruments to see whether they show us something similar or different
Give a reason why the data shows the energy spectra to be similar or different
Conclusion

Patrick Ekel
Austin Purdie
Title: Using Ordinary Least Squares to Predict National GDP

Intro: (1 min)
- Introduce ourselves and what we are studying
- Introduce what we chose to do our project on
- Explain what type of analysis we chose to implement
Methodology: (2 min)
- Brief overview of what ordinary least squares regression is and its history with Gauss and Legendre
- Discuss what method we chose and the pros and cons of the chosen method
- Discuss how this relates to linear algebra
Analysis: (5 min)
- Talk about our analysis and the fundamentals behind it
- Talk about the data we used and the design decisions we made
- Relate our analysis to class topic(s)
- Talk about how we were either surprised or not surprised by our findings
Conclusion: (1 min)
- Talk about further research
- Talk about what we would like to do or see done in the future regarding this subject
- Ask if there are any questions

Chris Billingsley
Bria Linza
Title: Fractals: Starting From The Base

Abstract: Fractals are interesting pieces of art made through math. Fractals are theoretically infinite; they are patterns that build on each other. In this case, we will use linear transformations to build fractals. The transformations will include translation, rotation, reflection, etc. We will use matrices to help with the creation of fractals.
There are many different ways to make fractals; one famous example is the Mandelbrot set. The fractals we will build start from simple shapes such as rectangles, which become smaller and smaller through repeated linear transformations, depending on the number of iterations. Typically the number of iterations used to create fractals is large, potentially going up to the tens or hundreds of thousands. Fractals are more than just mathematical art. They have their place in science and real-world applications. In game design, fractals are used to generate worlds and landscapes. They are also found in biology, with examples like trees and our nervous system. Understanding fractals could help us better understand ourselves and the world from an alternative perspective.

Table of Contents:
General Overview - 2 minutes
Ways to Create Fractals (Mandelbrot set, etc.) - 3 minutes
Examples of Fractals (writing out matrices, drawing some) - 4 minutes
Questions? - 1 minute

Asher Sorensen
Title: Photograph Manipulation Through Kernel Convolutions

Abstract: Kernel convolutions are the most basic form of image manipulation. Their uses include, but are not limited to, detecting edges, blurring, and scaling. The purpose of this research is to demonstrate the underlying processes of some of the most basic photograph filters. By highlighting the variety of such processes, this research exhibits the importance of linear algebra and its impact on our daily lives.

Table of Contents:
Introduction
Kernel Convolutions
Sobel Operator - approximation of the derivative of an image
Sobel Kernel - matrix computed on each pixel to get the change in gradient
Canny Edge Detector
Orientation of edges
Mean Blur
Gaussian Blur
Gaussian Kernel
Scaling
Nearest Neighbor
Linear interpolation
Bicubic interpolation

Josh Ulrich
Enoch Sanz
Title: Least Squares Model to Predict Body Weight

Abstract: The purpose of this project is to create a more accurate model for predicting body weight, assuming that individuals underestimate their weight randomly by 2-4%. I will plot both the old and new models together, demonstrating the difference between the old and new predictions. I have found that the least squares model is more accurate and relevant for making predictions on a universal basis. The least squares line has the smallest possible value for the sum of the squares of the residuals, therefore creating a more accurate model.

Work in progress: Not distinct from previously published 2270 projects (2012, 2016), nor is it different from the textbook examples (Maple sources supplied by Lay-Lay).

Brady Jacobson
Samuel Teare
Title: The Strengths and Weaknesses of Different Image Compression Techniques

Abstract: The ability to retain a photo's quality after compression is important in a world where media is paramount. For this project my group plans to test a variety of methods of image compression. Every compression technique will be tested on a number of photos. This will allow us to compare each side by side to keep track of details, pixels, coloration, and so on. If there are X compression methods and Y photos, then there will be X*Y individual compressions. The photos that we will use will vary in color, size, and detail, allowing us to gain an idea of the strengths and weaknesses of each method.

Table of Contents:
Background information on image compression, and how it has improved.
A list of each method being tested in this experiment, and why their differences set each of them apart.
A list of each photo being used for image compression, and why their differences matter.
A description of the code that we wrote in order to recreate each method.
Images of the photos before and after each compression.
A description of the effectiveness of each method, and in what situations you should use one over the other.

Jasper Slaff
Title: Using Homogeneous Coordinate Systems to Perform Linear Translations, Rotations, and Scaling on 3-Dimensional Objects in Computer Graphics, and How These Operations Relate to Robotics

Abstract: Transformations, both linear and affine, as related to robotics.

Table of Contents:
1. Present a demonstration of translation, rotation, and scaling on 2D graphics to create a 3D effect.
2. Talk over the report on my findings over the semester.
3. I will also submit a PDF copy of the report.

Saleema Qazi
Gabrielle Legaspi
Julie Vonesson
Title: Hill Ciphers

Abstract: We will discuss the history of Hill ciphers as well as their strength compared to other methods of cryptography. We will detail the process of encrypting a message. First we will assign numerical values to an alphabet so a message can be easily represented as a matrix. Then, we multiply the matrix by an invertible key matrix and convert back into the alphabet. Next, we will describe the process of decryption by multiplying with the inverse of the key matrix. We may also include a discussion of how to break the code with samples of text without the key matrix.

Introduction:
1. History: Hill ciphers were invented by Lester S. Hill in 1929.
2. At the time of invention they were the most practical polygraphic substitution cipher, because the process is simple and can be used on more than three symbols at a time (a unique attribute for the time).
3. Hill ciphers use linear and modular algebra to encrypt a message.
4. Basic steps: Assign numerical values to an alphabet, convert the message into a numerical matrix, multiply that matrix by an invertible key matrix, and convert the new matrix back into alphabet symbols to produce the encrypted message.
5. The process is reversed through the same steps, but using the inverse of the key matrix.

Hill Ciphers: a polygraphic substitution cipher that uses linear algebra to encrypt and decrypt messages.

Ciphering Process:
1. Choose an invertible, square matrix of any size that contains non-negative integers less than the number of characters in the set used for the message. In our example we choose from the non-negative integers less than 26:
[2 1 1]
[3 2 1]
[2 1 2]
2. Choose a message to encipher, e.g. MATHISFUN.
3. Assign numbers to the corresponding letters of the alphabet (A-1, B-2, C-3, etc.). Use this character assignment to create a matrix with the letters of the message.
4. Multiply the key matrix by the message matrix and then take every value of the resulting matrix modulo 26.
5. The final matrix contains the code word. Use the alphabet and number assignments to read off the code word.

Deciphering Process:
1. Find the inverse of the encoding matrix. This becomes the decoding matrix:
[ 3 -1 -1]
[-4  2  1]
[-1  0  1]
2. Take the message to decode, "NFUIPHTAI" for example, and place it in a 3x3 matrix. Use the same character assignment as before to replace each letter with its corresponding number.
3. Multiply the decoding matrix and the matrix that holds the encoded text. Take each entry in the resulting matrix modulo 26.
4. Replace each number with its corresponding letter to find the decrypted message.

Breaking the Code:
1. Challenges to breaking the code
2. Weaknesses in the code
3. Finding the encrypting matrix with samples of encoded and decoded text

Conclusion

Anthony Kounalis
Title: Traffic Flow and Electronic Sales

Abstract: Use a variety of websites to set up a matrix model for sales, the year of those sales, etc., and try to find a general formula to describe these sales. My goal is to use electronic sales data from the past to make simple predictions, such as the total sales in 2016, or even the totals for this year.
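The Hill cipher enciphering and deciphering steps from the MATHISFUN example above can be sketched in Python. This is a minimal illustrative sketch, not the authors' code; it assumes the A=1, ..., Z=26 numbering used in the example and fills the 3x3 message matrix row by row (MAT / HIS / FUN), which reproduces the ciphertext NFUIPHTAI given in the example.

```python
# Sketch of the Hill cipher example above (illustrative, not the authors' code).
# Letters are numbered A=1, ..., Z=26 as in the text; arithmetic is mod 26,
# so a value of 0 is read back as Z.

KEY = [[2, 1, 1],
       [3, 2, 1],
       [2, 1, 2]]

KEY_INV = [[ 3, -1, -1],   # inverse of KEY (its determinant is 1)
           [-4,  2,  1],
           [-1,  0,  1]]

def to_nums(text):
    return [ord(c) - ord('A') + 1 for c in text]

def to_text(nums):
    return ''.join(chr((n - 1) % 26 + ord('A')) for n in nums)

def matmul_mod26(A, B):
    # (3x3) times (3xn) matrix product, every entry reduced modulo 26
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % 26
             for j in range(len(B[0]))] for i in range(3)]

def apply_key(message, key):
    # fill the 3x3 message matrix row by row (MAT / HIS / FUN),
    # multiply by the key, and read the result back row by row
    nums = to_nums(message)
    M = [nums[i:i + 3] for i in range(0, 9, 3)]
    C = matmul_mod26(key, M)
    return to_text([x for row in C for x in row])

print(apply_key('MATHISFUN', KEY))      # -> NFUIPHTAI
print(apply_key('NFUIPHTAI', KEY_INV))  # -> MATHISFUN
```

Deciphering is the same operation performed with the inverse key, which is why the key matrix must be invertible.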
Tom Steinbrecher
Title: Demand Elasticity with Two-Stage Least Squares Regression

Abstract: Show how TSLS regression, a two-stage application of OLS, can be used to calculate the demand elasticity of a product. The paper will present the procedure along with its assumptions, as a way to understand the linear algebra powering the statistical reasoning.

Andy Krivanek
Title: Basketball Stats

Abstract: Develop a model for inferring the number of points, rebounds, and assists per game an NBA player would average over an 82-game season, using the player's position and minutes played per game. Investigate a model that uses height instead of position to determine stats.

Conner Elaine Schacherer
Title: DCT Compression

Abstract: This paper will go over a general idea of how digital images are represented. Each picture is composed of pixels, which are represented either in grayscale or in color. Uncompressed images can be quite large: if a color image were 1080 by 1080 pixels, 1080 * 1080 * 3 bytes would be needed to store the picture. However, using the DCT algorithm, a JPEG image can be created that is less than a tenth of that size. The DCT compression algorithm creates a JPEG image. This type of compression is lossy; the other type of compression is lossless. Lossless compression keeps all of the image data, but stores it using less memory. Lossy compression loses some data but is able to create a very similar image to the original. The Discrete Cosine Transform is related to the Fourier transform, and it uses matrices to store pixel information. The information is stored in 8x8 matrices; the algorithm then quantizes the information and sets a certain number of entries in each matrix to zero, depending on how much compression you want to use. Then it reads the coefficients of the new matrix along a zig-zag path and compresses them using Huffman encoding.
If you then reverse the process, you get the restored image with some data loss; depending on how intense the quantization is set, it will be very similar to the original image. This paper will end by comparing the DCT to other algorithms that compress images.

Table of Contents:
1) How Images are Represented
2) Lossy vs. Lossless Compression
3) The DCT Algorithm
4) Comparison with Different Rates of Compression
5) Comparison of DCT with other Compression Algorithms

Mark Lavelle
Title: Least-Squares Multiple Regression

Abstract: Write a script in the statistical package R to perform ordinary least-squares multiple regression on a data set provided in '.csv' format. The output of such a script would include the overall model fit, partial regression coefficients for all regressors, including all their interactions, significance tests and confidence intervals for each regressor, and multiple graphical depictions of the data.

Miriam Galecki
Title: Principal Component Analysis

Abstract: Demonstrate how PCA works. Explain the process using eigenvectors and an example. The example will be a large data set gathered from an analysis of the shape of barrier islands. I will explain what PCA is using a visual demonstration on a large data set and code in Matlab.

Table of Contents:
Introduction to what PCA is and why you use it
Step-by-step general example of how PCA is carried out:
- Start out with a matrix of raw scores
- I will then demonstrate how to get a matrix of deviation scores, and the sums of squared deviations, to find the variance/covariance matrix.
- Because (covariance matrix)(eigenvector) = (eigenvalue)(eigenvector), we know that by finding the eigenvalues and eigenvectors of the covariance matrix, the eigenvectors with the largest eigenvalues correspond to the dimensions that have the strongest correlation in the dataset.
The eigenvector with the largest eigenvalue is the 'principal component'.
- First find the eigenvalues of the covariance matrix
- Using the eigenvalues, find the eigenvectors
- Introduce the theorem of how each new xj can be written
Overview of what the PCA did:
- The mean of the old data becomes the origin of the new data after PCA is performed, and the eigenvectors become the new axes.
- Show an example I created using a huge data set and Matlab

Adam Lee
Title: Image Compression with Haar Wavelets

Abstract: The Haar wavelet transform was created in 1910 by Alfred Haar. This project uses the Haar transform to construct an orthonormal basis, and then applies it to image compression. The compression process transforms an image with the orthonormal basis, reduces it with a threshold value, epsilon, and then reverses the transform. This procedure compresses an image by progressively averaging adjacent pixels together. An example is shown with a grayscale image, with the process extended to color images in the included MATLAB script.

Jonah Barber
Title: Discrete Dynamic Systems and Continuous Approximations

Abstract: Investigate discrete dynamic systems that are approximations of continuous ones. Focus on model accuracy and the related velocity and acceleration vector fields.

Alexandra Bertagnolli
Joshua Rosen
Title: Using Markov Chains to Procedurally Generate Text

Abstract: Markov chains can be utilized to reveal patterns in text and human speech. We will program a Markov chain which generates statistically probable text based on its input. When the user gives the program a text file, the program calculates the likelihood of which words follow each other. Each word is then assigned a transition matrix, in which each element of the matrix denotes the probability that a given word occurs next.
The program chooses and displays a statistically likely series of words using the elements of these transition matrices. We intend to use this program to analyze patterns and bias in text.

Project Outline: We intend to run political speeches or famous literature through our program and analyze the patterns detected. This project will be completed in Java. The steps for the program are as follows:
1. For each unique word in the input file, calculate which words could possibly follow. Represent this statistical likelihood with a transition matrix for each unique word.
2. Select a random starting word, and set it as the current word to operate upon.
3. Produce a word to follow the current word using a weighted random decision. The weight is based on the transition matrix of the current word. Note that because Markov chains are memoryless, each word is generated based only on the current word, independently of the rest of the text.
4. Continue to produce words as many times as needed, based on the user-specified size of the output.

Table of Contents:
Abstract
Description of Markov Chains
Brief description of project process
Programming Implementation
Important Code Snippets
Example of Generated Text
Analysis of Generated Text
Possible Extensions/Uses

Rahul Ramkumar
Samuel Bridge
Title: Using Convolutional Neural Networks to Classify Images of Handwritten Text

Abstract: Convolutional neural networks are a type of neural network commonly used to identify objects in photos. This project will explore how a convolutional neural net uses linear algebra to "learn" how to read handwritten numbers.

Jeremy Jako
Nathan Rogers
Title: Applications in Genetics

Abstract: Using linear algebraic techniques to model heredity; in particular, how the genotypes AA, Aa, and aa are passed down from generation to generation. The process of heredity is fairly simple, but it can be useful to know how heredity plays out over longer periods of time, and these calculations can be simplified using linear algebra.
Some of the techniques we will use in particular include diagonalization, eigenvalues/eigenvectors, and the inverse of a matrix.

Table of Contents:
- Introduction
- Graphs/visualization of the problem at hand
- Solving the problem
- Conclusion and explanation of techniques used

Belle Barnes
Veronika Gribenko
Title: Productivity of the 2008 Economy

Abstract: The data on 2008 use of commodities by industry will be used to construct a consumption matrix, in conjunction with section 2.6 of our textbook. The resulting matrix will be used to determine the productivity of the 2008 US economy. The data set is taken from the United States Bureau of Economic Analysis. In 2008, the US market crashed and started to be rebuilt. A consumption matrix will show whether the economy was productive. This will be a beneficial examination of the market to determine patterns and the depth of the issues within the economy.

Table of Contents:
1. Give some background info on 2008 events and the hypothesis on how they could have affected the productivity of the economy
2. Present the completed matrix based on the data from the United States Bureau of Economic Analysis, and explain what it means (input/output)
3. Present the calculations done with the matrix and what they represent, including the necessary manipulation of data to perform the calculations
4. Conclude, based on the results of (I-A)^-1 (A being our consumption matrix), whether the economy was productive or not
5. How can this information benefit the decisions made? How is this relevant to economics?
6. General summary of work and overarching concepts

Dylan Johnson
Title: Intersections of Graph Theory and Linear Algebra

Abstract: Graphs are an incredibly versatile structure insofar as they can model everything from the modernity of computer science and the complexity of geography, to the intricacy of linguistic relationships and the universality of chemical structures.
Representing such graphs as matrices only enhances the computational aspects of this modeling. Ultimately, this necessitates linear algebra. This paper will explore the relationships between graphs, their associated matrix representations, and the matrix properties found in linear algebra. It will explore not only the adjacency matrices of graphs, but also the more interesting examples found in incidence matrices, path matrices, distance matrices, and Laplacian matrices. Investigations will include the utility, or lack thereof, of such matrix representations for various classes of graphs, including disconnected graphs, complete graphs, and trees. In order to achieve this goal, the paper will present some of the most interesting theorems regarding matrix representations of graphs, and will tie these theorems back to answering questions in graph theory itself.

Presentation Outline: I will skim over adjacency matrices, seeing as we had a lab regarding them, and likely skip path and distance matrices altogether, because adjacency, incidence, and Laplacian matrices are interrelated and provide much more interesting examples.
1. Basic Graph Theory
   a. Set definition of a graph
   b. Equivalent matrix definition of a graph
2. Adjacency Matrices
   a. Definition of an adjacency matrix
   b. Utility in finding distances and counting paths
   c. Properties of eigenvalues of a graph
3. Incidence Matrices
   a. Definition of incidence matrices
   b. Relationship between incidence and rank
   c. Utility of this relationship in counting components
4. Path Matrices (in paper, but not in presentation)
5. Distance Matrices (in paper, but not in presentation)
6. Laplacian Matrices
   a. Definition of Laplacian matrices
   b. Relationship between Laplacian matrices, adjacency matrices, and incidence matrices
   c. Some interesting properties:
      i. cofactor expansion
      ii. eigenvalues
      iii. the Matrix Tree Theorem

Joseph Pugliano
Brandon Sehestedt
Title: Cryptography: Matrices and Encryption

Abstract: A study of Hill ciphers and how to generate keys to encode and decode messages. We will also show the effectiveness of Hill ciphers and investigate other means of encrypting messages that might be more effective.

Table of Contents:
• Intro - One minute
• What is a Hill cipher, and its history - Four minutes
• How to generate a key and decode with it - Two minutes
• Discussion of how effective the encoding is - Two minutes
• Brief discussion of more effective encryption methods - One minute

Mark Van der Merwe
Andrew Haas
Ann Wilcox
3/9/17
Linear Algebra Semester Project
Title: Comparison and Optimization of SVD and DCT Image Compression Algorithms

Abstract: For our project we will be comparing the efficiency of SVD and DCT. We will compare how significant the visual deformation is relative to the decrease in file size for each compression algorithm. A bitmap image is compressed using singular value decomposition (SVD). Using Principal Component Analysis (PCA), we can throw out certain terms within a storage matrix, which will lower image quality but allow the image to be stored at a fraction of the file size. Using the Discrete Cosine Transform (DCT) compression algorithm, we can separate an image into its frequencies and keep only the "important" frequencies, again allowing us to shave off file size by sacrificing image quality. After comparing the two algorithms, we will find the point where the file size is decreased as much as possible without a significant difference in image quality.

Table of Contents:
1. Singular Value Decomposition Compression Algorithm
   a. Procedure of SVD Compression
      i. Decompose into matrices
      ii. Remove values
      iii. Recompose matrix
   b. Testing on Images
      i. Compression ratio
   c. Data Analysis
      i. Optimization of algorithm: image quality vs. compression ratio
2.
Discrete Cosine Transform Compression Algorithm
   a. Procedure of DCT Compression
      i. Separate image into frequencies
      ii. Remove "unimportant" frequencies
   b. Testing on Images
      i. Compression ratio
   c. Data Analysis
      i. Optimization of algorithm: image quality vs. compression ratio
3. Comparison of SVD and DCT
   a. Comparison of algorithm "efficiency"

Joe Narus
Title: Image Compression and Quality

Abstract: Using a grayscale image and the Matlab coding language, I will explore two different types of image compression, DCT and SVD, and determine what values are best for maintaining quality while also achieving a sufficient level of compression for both methods. Lastly, I'll briefly touch on other methods of compression, such as GIF and PNG, as a means of comparison for these two methods.

Introduction: Over the years, digital photography has become a larger part of our lives. We view hundreds of photos every day from the palm of our hands! Millions of these digital photos are stored all over the internet and take up a vast amount of memory, which is where image compression comes in handy. The main goal of image compression is to limit the amount of redundancy in an image so that it can be saved in a relatively efficient way that doesn't take up a lot of memory, while also maintaining as much of the original image quality as possible. There are two categories of image compression: lossless and lossy. Lossless image compression, found in GIF, PNG, and TIFF file formats, is generally used for important images and online storage. Lossy compression, such as the Discrete Cosine Transform (DCT) algorithm and Singular Value Decomposition (SVD) compression, is mostly found in applications where bit rates in file transfers need to be low and full quality isn't necessary.

Table of Contents:
1. Title slide
2. Explanation of why image compression is important
3. Explanation of lossless vs. lossy compression
4. Explanation of the DCT algorithm and its results
5.
Explanation of the SVD algorithm and its results
6. Comparison with the compression algorithms of GIF and PNG

Matt Westberg
Title: Cracking the Hill Cipher: How to Break an Encrypted Message

Abstract: Encryption is used today for storing passwords in databases, and even for passing messages. While encrypting messages can be useful when attempting to pass a message, this project focuses on the cracking of these encrypted messages. When enough samples are gathered of both the message and the coded message, the key matrix can be solved for. There are many ways to crack messages, but this project will focus primarily on cracking the Hill cipher.

Background information: The Hill cipher is an encryption method based on linear algebra. It takes a message and converts each letter to its assigned number. This numerical message is then transformed into a matrix, which is multiplied by an invertible key matrix. After doing so, each entry of the resulting matrix is reduced modulo the size of the alphabet. For instance, if the alphabet is the standard A-Z, one would reduce modulo 26 for the 26 letters. The decryption process is the opposite: one finds the inverse of the key matrix and multiplies it with the coded numerical values, continuing to apply the modulo operation to keep the numbers within the range of the alphabet. After applying the modulo operation, this gives the original message.

Cracking the Hill cipher: Cracking the code means obtaining the key matrix, so that if any other encrypted message is given, one would be able to decode it. When attempting to crack a code, it is important that enough information is gathered. If the key matrix is an N x N matrix, it is necessary to obtain N^2 characters of matched plaintext and ciphertext, i.e. N message blocks of length N. For example, if the key matrix is a 2x2 matrix, it is necessary to obtain 2 plaintext blocks and their corresponding 2 encrypted blocks.
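The key-recovery idea described above can be sketched in Python. This is a minimal sketch under the conventions of the Hill cipher example earlier in this collection (A=1, ..., Z=26, a 3x3 key, message matrices filled row by row), not this project's actual code: given a plaintext matrix P and its ciphertext C = KP (mod 26), the key is recovered as K = C P^-1 (mod 26), provided det(P) is coprime to 26.

```python
# Sketch of Hill cipher key recovery from known plaintext/ciphertext
# (illustrative, not the project's code). Conventions: A=1, ..., Z=26, mod 26.

P = [[13,  1, 20],   # plaintext  MATHISFUN -> M A T / H I S / F U N
     [ 8,  9, 19],
     [ 6, 21, 14]]
C = [[14,  6, 21],   # ciphertext NFUIPHTAI -> N F U / I P H / T A I
     [ 9, 16,  8],
     [20,  1,  9]]

def det3(A):
    return (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))

def inv_mod26(A):
    # modular inverse via the adjugate: A^-1 = det(A)^-1 * adj(A) (mod 26);
    # pow raises ValueError when det(A) is not coprime to 26, i.e. when
    # these blocks cannot be used and more intercepted text is needed
    d_inv = pow(det3(A) % 26, -1, 26)
    adj = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            rows = [r for r in range(3) if r != j]   # minor of entry (j, i)
            cols = [c for c in range(3) if c != i]
            minor = (A[rows[0]][cols[0]] * A[rows[1]][cols[1]]
                   - A[rows[0]][cols[1]] * A[rows[1]][cols[0]])
            adj[i][j] = (-1) ** (i + j) * minor
    return [[d_inv * adj[i][j] % 26 for j in range(3)] for i in range(3)]

def matmul_mod26(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % 26
             for j in range(3)] for i in range(3)]

recovered_key = matmul_mod26(C, inv_mod26(P))
print(recovered_key)   # -> [[2, 1, 1], [3, 2, 1], [2, 1, 2]]
```

The recovered matrix matches the example key, illustrating that N^2 matched characters (here 9) suffice when the plaintext blocks form an invertible matrix mod 26.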
Summary of results: We found that, given enough information, it was in fact possible to find the key matrix of the encryption. The trick is to have enough text to find linearly independent column vectors; more precisely, the matrix of plaintext blocks must be invertible modulo 26 (equivalently, the corresponding ciphertext blocks). If the column vectors are linearly dependent, then those column vectors cannot be used in the process, and more data will be needed. We found that for a 2x2 key matrix, 2 encrypted blocks and their 2 plaintext blocks were sufficient to find the key matrix. To make the process harder to crack, one could use a 3x3 or 4x4 key matrix, for which more intercepted messages and their meanings would be necessary to find the key. Based on these results, one strategy to increase the security of the encrypted code is to use a larger key matrix: mathematically speaking, as the key matrix increases in size, the amount of matched intercepted text needed to crack the code rises in a quadratic manner, since an N x N key requires N^2 matched characters.