백종현

Add source_code

Showing 94 changed files with 2074 additions and 0 deletions
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
# dependencies
/node_modules
/.pnp
.pnp.js
# testing
/coverage
# production
/build
# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*
This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app).
## Available Scripts
In the project directory, you can run:
### `npm start`
Runs the app in the development mode.<br />
Open [http://localhost:3000](http://localhost:3000) to view it in the browser.
The page will reload if you make edits.<br />
You will also see any lint errors in the console.
### `npm test`
Launches the test runner in the interactive watch mode.<br />
See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information.
### `npm run build`
Builds the app for production to the `build` folder.<br />
It correctly bundles React in production mode and optimizes the build for the best performance.
The build is minified and the filenames include the hashes.<br />
Your app is ready to be deployed!
See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information.
### `npm run eject`
**Note: this is a one-way operation. Once you `eject`, you can’t go back!**
If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project.
Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.
You don’t have to ever use `eject`. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.
## Learn More
You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started).
To learn React, check out the [React documentation](https://reactjs.org/).
{
"name": "capstone",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^4.2.4",
"@testing-library/react": "^9.5.0",
"@testing-library/user-event": "^7.2.1",
"@types/jest": "^24.9.1",
"@types/node": "^12.12.37",
"@types/react": "^16.9.34",
"@types/react-dom": "^16.9.7",
"react": "^16.13.1",
"react-dom": "^16.13.1",
"react-scripts": "3.4.1",
"typescript": "^3.7.5"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": "react-app"
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="theme-color" content="#000000" />
<meta
name="description"
content="Web site created using create-react-app"
/>
<link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
<!--
manifest.json provides metadata used when your web app is installed on a
user's mobile device or desktop. See https://developers.google.com/web/fundamentals/web-app-manifest/
-->
<link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
<!--
Notice the use of %PUBLIC_URL% in the tags above.
It will be replaced with the URL of the `public` folder during the build.
Only files inside the `public` folder can be referenced from the HTML.
Unlike "/favicon.ico" or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will
work correctly both with client-side routing and a non-root public URL.
Learn how to configure a non-root public URL by running `npm run build`.
-->
<title>React App</title>
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
<!--
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
You can add webfonts, meta tags, or analytics to this file.
The build step will place the bundled scripts into the <body> tag.
To begin the development, run `npm start` or `yarn start`.
To create a production bundle, use `npm run build` or `yarn build`.
-->
</body>
</html>
{
"short_name": "React App",
"name": "Create React App Sample",
"icons": [
{
"src": "favicon.ico",
"sizes": "64x64 32x32 24x24 16x16",
"type": "image/x-icon"
},
{
"src": "logo192.png",
"type": "image/png",
"sizes": "192x192"
},
{
"src": "logo512.png",
"type": "image/png",
"sizes": "512x512"
}
],
"start_url": ".",
"display": "standalone",
"theme_color": "#000000",
"background_color": "#ffffff"
}
# https://www.robotstxt.org/robotstxt.html
User-agent: *
Disallow:
.App {
text-align: center;
}
.App-logo {
height: 40vmin;
pointer-events: none;
}
@media (prefers-reduced-motion: no-preference) {
.App-logo {
animation: App-logo-spin infinite 20s linear;
}
}
.App-header {
background-color: #282c34;
min-height: 100vh;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
font-size: calc(10px + 2vmin);
color: white;
}
.App-link {
color: #61dafb;
}
@keyframes App-logo-spin {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}
import React from 'react';
import { render } from '@testing-library/react';
import App from './App';
test('renders learn react link', () => {
const { getByText } = render(<App />);
const linkElement = getByText(/learn react/i);
expect(linkElement).toBeInTheDocument();
});
import React from 'react';
import logo from './logo.svg';
import './App.css';
function App() {
return (
<div className="App">
<header className="App-header">
<img src={logo} className="App-logo" alt="logo" />
<p>
Edit <code>src/App.tsx</code> and save to reload.
</p>
<a
className="App-link"
href="https://reactjs.org"
target="_blank"
rel="noopener noreferrer"
>
Learn React
</a>
</header>
</div>
);
}
export default App;
body {
margin: 0;
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
code {
font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New',
monospace;
}
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import * as serviceWorker from './serviceWorker';
ReactDOM.render(
<React.StrictMode>
<App />
</React.StrictMode>,
document.getElementById('root')
);
// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers: https://bit.ly/CRA-PWA
serviceWorker.unregister();
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 841.9 595.3">
<g fill="#61DAFB">
<path d="M666.3 296.5c0-32.5-40.7-63.3-103.1-82.4 14.4-63.6 8-114.2-20.2-130.4-6.5-3.8-14.1-5.6-22.4-5.6v22.3c4.6 0 8.3.9 11.4 2.6 13.6 7.8 19.5 37.5 14.9 75.7-1.1 9.4-2.9 19.3-5.1 29.4-19.6-4.8-41-8.5-63.5-10.9-13.5-18.5-27.5-35.3-41.6-50 32.6-30.3 63.2-46.9 84-46.9V78c-27.5 0-63.5 19.6-99.9 53.6-36.4-33.8-72.4-53.2-99.9-53.2v22.3c20.7 0 51.4 16.5 84 46.6-14 14.7-28 31.4-41.3 49.9-22.6 2.4-44 6.1-63.6 11-2.3-10-4-19.7-5.2-29-4.7-38.2 1.1-67.9 14.6-75.8 3-1.8 6.9-2.6 11.5-2.6V78.5c-8.4 0-16 1.8-22.6 5.6-28.1 16.2-34.4 66.7-19.9 130.1-62.2 19.2-102.7 49.9-102.7 82.3 0 32.5 40.7 63.3 103.1 82.4-14.4 63.6-8 114.2 20.2 130.4 6.5 3.8 14.1 5.6 22.5 5.6 27.5 0 63.5-19.6 99.9-53.6 36.4 33.8 72.4 53.2 99.9 53.2 8.4 0 16-1.8 22.6-5.6 28.1-16.2 34.4-66.7 19.9-130.1 62-19.1 102.5-49.9 102.5-82.3zm-130.2-66.7c-3.7 12.9-8.3 26.2-13.5 39.5-4.1-8-8.4-16-13.1-24-4.6-8-9.5-15.8-14.4-23.4 14.2 2.1 27.9 4.7 41 7.9zm-45.8 106.5c-7.8 13.5-15.8 26.3-24.1 38.2-14.9 1.3-30 2-45.2 2-15.1 0-30.2-.7-45-1.9-8.3-11.9-16.4-24.6-24.2-38-7.6-13.1-14.5-26.4-20.8-39.8 6.2-13.4 13.2-26.8 20.7-39.9 7.8-13.5 15.8-26.3 24.1-38.2 14.9-1.3 30-2 45.2-2 15.1 0 30.2.7 45 1.9 8.3 11.9 16.4 24.6 24.2 38 7.6 13.1 14.5 26.4 20.8 39.8-6.3 13.4-13.2 26.8-20.7 39.9zm32.3-13c5.4 13.4 10 26.8 13.8 39.8-13.1 3.2-26.9 5.9-41.2 8 4.9-7.7 9.8-15.6 14.4-23.7 4.6-8 8.9-16.1 13-24.1zM421.2 430c-9.3-9.6-18.6-20.3-27.8-32 9 .4 18.2.7 27.5.7 9.4 0 18.7-.2 27.8-.7-9 11.7-18.3 22.4-27.5 32zm-74.4-58.9c-14.2-2.1-27.9-4.7-41-7.9 3.7-12.9 8.3-26.2 13.5-39.5 4.1 8 8.4 16 13.1 24 4.7 8 9.5 15.8 14.4 23.4zM420.7 163c9.3 9.6 18.6 20.3 27.8 32-9-.4-18.2-.7-27.5-.7-9.4 0-18.7.2-27.8.7 9-11.7 18.3-22.4 27.5-32zm-74 58.9c-4.9 7.7-9.8 15.6-14.4 23.7-4.6 8-8.9 16-13 24-5.4-13.4-10-26.8-13.8-39.8 13.1-3.1 26.9-5.8 41.2-7.9zm-90.5 125.2c-35.4-15.1-58.3-34.9-58.3-50.6 0-15.7 22.9-35.6 58.3-50.6 8.6-3.7 18-7 27.7-10.1 5.7 19.6 13.2 40 22.5 60.9-9.2 20.8-16.6 41.1-22.2 60.6-9.9-3.1-19.3-6.5-28-10.2zM310 490c-13.6-7.8-19.5-37.5-14.9-75.7 1.1-9.4 2.9-19.3 5.1-29.4 19.6 4.8 41 8.5 63.5 10.9 13.5 18.5 27.5 35.3 41.6 50-32.6 30.3-63.2 46.9-84 46.9-4.5-.1-8.3-1-11.3-2.7zm237.2-76.2c4.7 38.2-1.1 67.9-14.6 75.8-3 1.8-6.9 2.6-11.5 2.6-20.7 0-51.4-16.5-84-46.6 14-14.7 28-31.4 41.3-49.9 22.6-2.4 44-6.1 63.6-11 2.3 10.1 4.1 19.8 5.2 29.1zm38.5-66.7c-8.6 3.7-18 7-27.7 10.1-5.7-19.6-13.2-40-22.5-60.9 9.2-20.8 16.6-41.1 22.2-60.6 9.9 3.1 19.3 6.5 28.1 10.2 35.4 15.1 58.3 34.9 58.3 50.6-.1 15.7-23 35.6-58.4 50.6zM320.8 78.4z"/>
<circle cx="420.9" cy="296.5" r="45.7"/>
<path d="M520.5 78.1z"/>
</g>
</svg>
/// <reference types="react-scripts" />
// This optional code is used to register a service worker.
// register() is not called by default.
// This lets the app load faster on subsequent visits in production, and gives
// it offline capabilities. However, it also means that developers (and users)
// will only see deployed updates on subsequent visits to a page, after all the
// existing tabs open on the page have been closed, since previously cached
// resources are updated in the background.
// To learn more about the benefits of this model and instructions on how to
// opt-in, read https://bit.ly/CRA-PWA
const isLocalhost = Boolean(
window.location.hostname === 'localhost' ||
// [::1] is the IPv6 localhost address.
window.location.hostname === '[::1]' ||
// 127.0.0.0/8 are considered localhost for IPv4.
window.location.hostname.match(
/^127(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}$/
)
);
type Config = {
onSuccess?: (registration: ServiceWorkerRegistration) => void;
onUpdate?: (registration: ServiceWorkerRegistration) => void;
};
export function register(config?: Config) {
if (process.env.NODE_ENV === 'production' && 'serviceWorker' in navigator) {
// The URL constructor is available in all browsers that support SW.
const publicUrl = new URL(
process.env.PUBLIC_URL,
window.location.href
);
if (publicUrl.origin !== window.location.origin) {
// Our service worker won't work if PUBLIC_URL is on a different origin
// from what our page is served on. This might happen if a CDN is used to
// serve assets; see https://github.com/facebook/create-react-app/issues/2374
return;
}
window.addEventListener('load', () => {
const swUrl = `${process.env.PUBLIC_URL}/service-worker.js`;
if (isLocalhost) {
// This is running on localhost. Let's check if a service worker still exists or not.
checkValidServiceWorker(swUrl, config);
// Add some additional logging to localhost, pointing developers to the
// service worker/PWA documentation.
navigator.serviceWorker.ready.then(() => {
console.log(
'This web app is being served cache-first by a service ' +
'worker. To learn more, visit https://bit.ly/CRA-PWA'
);
});
} else {
// Is not localhost. Just register service worker
registerValidSW(swUrl, config);
}
});
}
}
function registerValidSW(swUrl: string, config?: Config) {
navigator.serviceWorker
.register(swUrl)
.then(registration => {
registration.onupdatefound = () => {
const installingWorker = registration.installing;
if (installingWorker == null) {
return;
}
installingWorker.onstatechange = () => {
if (installingWorker.state === 'installed') {
if (navigator.serviceWorker.controller) {
// At this point, the updated precached content has been fetched,
// but the previous service worker will still serve the older
// content until all client tabs are closed.
console.log(
'New content is available and will be used when all ' +
'tabs for this page are closed. See https://bit.ly/CRA-PWA.'
);
// Execute callback
if (config && config.onUpdate) {
config.onUpdate(registration);
}
} else {
// At this point, everything has been precached.
// It's the perfect time to display a
// "Content is cached for offline use." message.
console.log('Content is cached for offline use.');
// Execute callback
if (config && config.onSuccess) {
config.onSuccess(registration);
}
}
}
};
};
})
.catch(error => {
console.error('Error during service worker registration:', error);
});
}
function checkValidServiceWorker(swUrl: string, config?: Config) {
// Check if the service worker can be found. If it can't reload the page.
fetch(swUrl, {
headers: { 'Service-Worker': 'script' }
})
.then(response => {
// Ensure service worker exists, and that we really are getting a JS file.
const contentType = response.headers.get('content-type');
if (
response.status === 404 ||
(contentType != null && contentType.indexOf('javascript') === -1)
) {
// No service worker found. Probably a different app. Reload the page.
navigator.serviceWorker.ready.then(registration => {
registration.unregister().then(() => {
window.location.reload();
});
});
} else {
// Service worker found. Proceed as normal.
registerValidSW(swUrl, config);
}
})
.catch(() => {
console.log(
'No internet connection found. App is running in offline mode.'
);
});
}
export function unregister() {
if ('serviceWorker' in navigator) {
navigator.serviceWorker.ready
.then(registration => {
registration.unregister();
})
.catch(error => {
console.error(error.message);
});
}
}
// jest-dom adds custom jest matchers for asserting on DOM nodes.
// allows you to do things like:
// expect(element).toHaveTextContent(/react/i)
// learn more: https://github.com/testing-library/jest-dom
import '@testing-library/jest-dom/extend-expect';
{
"compilerOptions": {
"target": "es5",
"lib": [
"dom",
"dom.iterable",
"esnext"
],
"allowJs": true,
"skipLibCheck": true,
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"strict": true,
"forceConsistentCasingInFileNames": true,
"module": "esnext",
"moduleResolution": "node",
"resolveJsonModule": true,
"isolatedModules": true,
"noEmit": true,
"jsx": "react"
},
"include": [
"src"
]
}
HELP.md
.gradle
build/
!gradle/wrapper/gradle-wrapper.jar
!**/src/main/**
!**/src/test/**
### STS ###
.apt_generated
.classpath
.factorypath
.project
.settings
.springBeans
.sts4-cache
### IntelliJ IDEA ###
.idea
*.iws
*.iml
*.ipr
out/
### NetBeans ###
/nbproject/private/
/nbbuild/
/dist/
/nbdist/
/.nb-gradle/
### VS Code ###
.vscode/
plugins {
id 'org.springframework.boot' version '2.2.6.RELEASE'
id 'io.spring.dependency-management' version '1.0.9.RELEASE'
id 'java'
}
group = 'com.capstone'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = '11'
configurations {
compileOnly {
extendsFrom annotationProcessor
}
}
repositories {
mavenCentral()
}
dependencies {
implementation platform("com.google.cloud:libraries-bom:4.0.0");
implementation 'com.google.cloud:google-cloud-storage'
implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
implementation 'org.springframework.boot:spring-boot-starter-webflux'
compileOnly 'org.projectlombok:lombok'
runtimeOnly 'com.h2database:h2'
annotationProcessor 'org.projectlombok:lombok'
testImplementation('org.springframework.boot:spring-boot-starter-test') {
exclude group: 'org.junit.vintage', module: 'junit-vintage-engine'
}
}
test {
useJUnitPlatform()
}
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-6.3-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
#!/usr/bin/env sh
#
# Copyright 2015 the original author or authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
##############################################################################
##
## Gradle start up script for UN*X
##
##############################################################################
# Attempt to set APP_HOME
# Resolve links: $0 may be a link
PRG="$0"
# Need this for relative symlinks.
while [ -h "$PRG" ]; do
ls=$(ls -ld "$PRG")
link=$(expr "$ls" : '.*-> \(.*\)$')
if expr "$link" : '/.*' >/dev/null; then
PRG="$link"
else
PRG=$(dirname "$PRG")"/$link"
fi
done
SAVED="$(pwd)"
cd "$(dirname \"$PRG\")/" >/dev/null
APP_HOME="$(pwd -P)"
cd "$SAVED" >/dev/null
APP_NAME="Gradle"
APP_BASE_NAME=$(basename "$0")
# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"'
# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"
warn() {
echo "$*"
}
die() {
echo
echo "$*"
echo
exit 1
}
# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
nonstop=false
case "$(uname)" in
CYGWIN*)
cygwin=true
;;
Darwin*)
darwin=true
;;
MINGW*)
msys=true
;;
NONSTOP*)
nonstop=true
;;
esac
CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ]; then
if [ -x "$JAVA_HOME/jre/sh/java" ]; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
if [ ! -x "$JAVACMD" ]; then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
else
JAVACMD="java"
which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
# Increase the maximum file descriptors if we can.
if [ "$cygwin" = "false" -a "$darwin" = "false" -a "$nonstop" = "false" ]; then
MAX_FD_LIMIT=$(ulimit -H -n)
if [ $? -eq 0 ]; then
if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ]; then
MAX_FD="$MAX_FD_LIMIT"
fi
ulimit -n $MAX_FD
if [ $? -ne 0 ]; then
warn "Could not set maximum file descriptor limit: $MAX_FD"
fi
else
warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
fi
fi
# For Darwin, add options to specify how the application appears in the dock
if $darwin; then
GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi
# For Cygwin or MSYS, switch paths to Windows format before running java
if [ "$cygwin" = "true" -o "$msys" = "true" ]; then
APP_HOME=$(cygpath --path --mixed "$APP_HOME")
CLASSPATH=$(cygpath --path --mixed "$CLASSPATH")
JAVACMD=$(cygpath --unix "$JAVACMD")
# We build the pattern for arguments to be converted via cygpath
ROOTDIRSRAW=$(find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null)
SEP=""
for dir in $ROOTDIRSRAW; do
ROOTDIRS="$ROOTDIRS$SEP$dir"
SEP="|"
done
OURCYGPATTERN="(^($ROOTDIRS))"
# Add a user-defined pattern to the cygpath arguments
if [ "$GRADLE_CYGPATTERN" != "" ]; then
OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
fi
# Now convert the arguments - kludge to limit ourselves to /bin/sh
i=0
for arg in "$@"; do
CHECK=$(echo "$arg" | egrep -c "$OURCYGPATTERN" -)
CHECK2=$(echo "$arg" | egrep -c "^-") ### Determine if an option
if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ]; then ### Added a condition
eval $(echo args$i)=$(cygpath --path --ignore --mixed "$arg")
else
eval $(echo args$i)="\"$arg\""
fi
i=$(expr $i + 1)
done
case $i in
0) set -- ;;
1) set -- "$args0" ;;
2) set -- "$args0" "$args1" ;;
3) set -- "$args0" "$args1" "$args2" ;;
4) set -- "$args0" "$args1" "$args2" "$args3" ;;
5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
esac
fi
# Escape application args
save() {
for i; do printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/"; done
echo " "
}
APP_ARGS=$(save "$@")
# Collect all arguments for the java command, following the shell quoting and substitution rules
eval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS "\"-Dorg.gradle.appname=$APP_BASE_NAME\"" -classpath "\"$CLASSPATH\"" org.gradle.wrapper.GradleWrapperMain "$APP_ARGS"
exec "$JAVACMD" "$@"
@rem
@rem Copyright 2015 the original author or authors.
@rem
@rem Licensed under the Apache License, Version 2.0 (the "License");
@rem you may not use this file except in compliance with the License.
@rem You may obtain a copy of the License at
@rem
@rem https://www.apache.org/licenses/LICENSE-2.0
@rem
@rem Unless required by applicable law or agreed to in writing, software
@rem distributed under the License is distributed on an "AS IS" BASIS,
@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@rem See the License for the specific language governing permissions and
@rem limitations under the License.
@rem
@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@rem Gradle startup script for Windows
@rem
@rem ##########################################################################
@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal
set DIRNAME=%~dp0
if "%DIRNAME%" == "" set DIRNAME=.
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%
@rem Resolve any "." and ".." in APP_HOME to make it shorter.
for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"
@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome
set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if "%ERRORLEVEL%" == "0" goto init
echo.
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe
if exist "%JAVA_EXE%" goto init
echo.
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:init
@rem Get command-line arguments, handling Windows variants
if not "%OS%" == "Windows_NT" goto win9xME_args
:win9xME_args
@rem Slurp the command line arguments.
set CMD_LINE_ARGS=
set _SKIP=2
:win9xME_args_slurp
if "x%~1" == "x" goto execute
set CMD_LINE_ARGS=%*
:execute
@rem Setup the command line
set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS%
:end
@rem End local scope for the variables with windows NT shell
if "%ERRORLEVEL%"=="0" goto mainEnd
:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
exit /b 1
:mainEnd
if "%OS%"=="Windows_NT" endlocal
:omega
package com.capstone.web;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class WebApplication {
public static void main(String[] args) {
SpringApplication.run(WebApplication.class, args);
}
}
package com.capstone.web.config;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import org.springframework.beans.factory.annotation.Configurable;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@Configurable
public class GCStorage {
@Bean
public Storage storage() {
return StorageOptions.getDefaultInstance().getService();
}
}
package com.capstone.web.controller;
import com.capstone.web.dto.EmotionResponseDto;
import com.capstone.web.dto.ScriptResponseDto;
import com.capstone.web.dto.VideoResponseDto;
import com.capstone.web.service.GCSReaderService;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;
import java.util.List;
@RestController
public class GCSController {
private final GCSReaderService gcsReaderService;
public GCSController(GCSReaderService gcsReaderService) {
this.gcsReaderService = gcsReaderService;
}
@GetMapping("/get-all-videos")
public List<VideoResponseDto> getAllVideos(@RequestParam(name = "storageName") String name) {
return gcsReaderService.getAllVideos(name);
}
//Download from GCS => split the source and generate a script for each segment
@GetMapping("/get-script-result")
public Mono<ScriptResponseDto> getScriptResult(@RequestParam(name = "videoName") String name) {
return gcsReaderService.getScriptResult(name);
}
@GetMapping("/get-emotion")
public Mono<EmotionResponseDto> getEmotionResult(@RequestParam(name = "videoName") String name) {
return gcsReaderService.getEmotionResult(name);
}
@GetMapping("/get-chat-result")
public Mono<String> getChatResult(@RequestParam(name = "videoName") String name) {
return gcsReaderService.getChatResult(name);
}
@GetMapping("/get-decibel-result")
public Mono<String> getDecibelResult(@RequestParam(name = "videoName") String name) {
return gcsReaderService.getDecibelResult(name);
}
}
//name: test123.wav => names come back in directory/file format
//Time Created : will likely be needed later when building the script
//content Type : video, audio, text/plain
package com.capstone.web.dto;
import lombok.Builder;
import lombok.Getter;
@Getter
public class EmotionItem {
Integer start;
Integer end;
@Builder
public EmotionItem(Integer start, Integer end){
this.start = start;
this.end = end;
}
}
package com.capstone.web.dto;
import lombok.AllArgsConstructor;
import lombok.Getter;
import java.util.ArrayList;
import java.util.List;
@Getter
@AllArgsConstructor
public class EmotionResponseDto {
List<EmotionItem> EmotionEditList = new ArrayList<>();
}
package com.capstone.web.dto;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Getter;
import java.util.ArrayList;
import java.util.List;
@Getter
@JsonIgnoreProperties(ignoreUnknown = true)
public class ScriptResponseDto {
private String fullScript;
private List<TopicEditItem> topicEditList = new ArrayList<>();
@Builder
public ScriptResponseDto(String fullScript, List<TopicEditItem> topicEditList){
this.fullScript = fullScript;
this.topicEditList = topicEditList;
}
}
package com.capstone.web.dto;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Getter;
import lombok.NoArgsConstructor;
@Getter
@JsonIgnoreProperties(ignoreUnknown = true)
public class TopicEditItem {
Integer start;
Integer end;
String topic;
@Builder
public TopicEditItem(Integer start, Integer end, String topic){
this.start = start;
this.end = end;
this.topic = topic;
}
}
package com.capstone.web.dto;
import lombok.Builder;
import lombok.Getter;
import lombok.NoArgsConstructor;
import java.util.Date;
@Getter
@NoArgsConstructor
public class VideoResponseDto {
private String name;
//mp4, mov ....
private String extension;
private Date createdTime;
@Builder
public VideoResponseDto(String name, String extension, Long createdTime) {
this.name = name;
this.extension = extension;
this.createdTime = new Date(createdTime);
}
}
package com.capstone.web.service;
import com.capstone.web.dto.EmotionResponseDto;
import com.capstone.web.dto.ScriptResponseDto;
import com.capstone.web.dto.VideoResponseDto;
import com.google.api.client.util.Lists;
import com.google.cloud.storage.Blob;
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;
import java.util.Date;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
@Service
public class GCSReaderService {
@Autowired
private final Storage storage;
@Autowired
WebClient.Builder builder;
public GCSReaderService(Storage storage) {
this.storage = storage;
}
public List<VideoResponseDto> getAllVideos(String userName) {
Bucket bucket = storage.get(userName, Storage.BucketGetOption.fields(Storage.BucketField.values()));
return Lists.newArrayList(bucket.list().iterateAll())
.stream()
.filter(this::isVideo)
.map(this::blobToVideoResponseDto)
.collect(Collectors.toList());
}
public Mono<ScriptResponseDto> getScriptResult(String name) {
WebClient webClient = builder.baseUrl("http://localhost:5000").build();
return webClient.get()
.uri(uriBuilder -> uriBuilder.path("/script-api")
.queryParam("fileName", name)
.build())
.retrieve()
.bodyToMono(ScriptResponseDto.class);
}
public Mono<EmotionResponseDto> getEmotionResult(String name) {
WebClient webClient = builder.baseUrl("http://localhost:5000").build();
return webClient.get()
.uri(uriBuilder -> uriBuilder.path("/emotion-api")
.queryParam("fileName", name)
.build())
.retrieve()
.bodyToMono(EmotionResponseDto.class);
}
public Mono<String> getChatResult(String name) {
WebClient webClient = builder.baseUrl("http://chathost:5000").build();
return webClient.get()
.uri(uriBuilder -> uriBuilder.path("/chat-api")
.queryParam("fileName", name)
.build())
.retrieve()
.bodyToMono(String.class);
}
public Mono<String> getDecibelResult(String name) {
WebClient webClient = builder.baseUrl("http://localhost:5000").build();
return webClient.get()
.uri(uriBuilder -> uriBuilder.path("/decibel-api")
.queryParam("fileName", name)
.build())
.retrieve()
.bodyToMono(String.class);
}
private VideoResponseDto blobToVideoResponseDto(Blob blob) {
return VideoResponseDto.builder()
.name(getVideoName(blob.getName()))
.createdTime(blob.getCreateTime())
.extension(getVideoExtension(blob.getContentType()))
.build();
}
private String getVideoName(String name) {
return name.split("/")[0];
}
private boolean isVideo(Blob blob) {
return blob.getContentType().contains("video");
}
private String getVideoExtension(String contentType) {
return contentType.split("/")[1];
}
private void printBlobAllMetaData(Blob blob) {
// Print blob metadata
System.out.println("======================================\n");
System.out.println("Bucket: " + blob.getBucket());
System.out.println("CacheControl: " + blob.getCacheControl());
System.out.println("ComponentCount: " + blob.getComponentCount());
System.out.println("ContentDisposition: " + blob.getContentDisposition());
System.out.println("ContentEncoding: " + blob.getContentEncoding());
System.out.println("ContentLanguage: " + blob.getContentLanguage());
System.out.println("ContentType: " + blob.getContentType());
System.out.println("Crc32c: " + blob.getCrc32c());
System.out.println("Crc32cHexString: " + blob.getCrc32cToHexString());
System.out.println("ETag: " + blob.getEtag());
System.out.println("Generation: " + blob.getGeneration());
System.out.println("Id: " + blob.getBlobId());
System.out.println("KmsKeyName: " + blob.getKmsKeyName());
System.out.println("Md5Hash: " + blob.getMd5());
System.out.println("Md5HexString: " + blob.getMd5ToHexString());
System.out.println("MediaLink: " + blob.getMediaLink());
System.out.println("Metageneration: " + blob.getMetageneration());
System.out.println("Name: " + blob.getName());
System.out.println("Size: " + blob.getSize());
System.out.println("StorageClass: " + blob.getStorageClass());
System.out.println("TimeCreated: " + new Date(blob.getCreateTime()));
System.out.println("Last Metadata Update: " + new Date(blob.getUpdateTime()));
Boolean temporaryHoldIsEnabled = (blob.getTemporaryHold() != null && blob.getTemporaryHold());
System.out.println("temporaryHold: " + (temporaryHoldIsEnabled ? "enabled" : "disabled"));
Boolean eventBasedHoldIsEnabled =
(blob.getEventBasedHold() != null && blob.getEventBasedHold());
System.out.println("eventBasedHold: " + (eventBasedHoldIsEnabled ? "enabled" : "disabled"));
if (blob.getRetentionExpirationTime() != null) {
System.out.println("retentionExpirationTime: " + new Date(blob.getRetentionExpirationTime()));
}
if (blob.getMetadata() != null) {
System.out.println("\n\n\nUser metadata:");
for (Map.Entry<String, String> userMetadata : blob.getMetadata().entrySet()) {
System.out.println(userMetadata.getKey() + "=" + userMetadata.getValue());
}
}
}
}
package com.capstone.web;
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
@SpringBootTest
class WebApplicationTests {
@Test
void contextLoads() {
}
}
capstone-sptt.json
__pycache__/
*.csv
*.png
.ipynb_checkpoints/
FROM ubuntu:16.04
WORKDIR /root
EXPOSE 5000
ENV PROJ_NAME=static-protocol-264107
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
COPY ./*.py /root/
RUN apt-get -y update && apt-get -y install python3 python3-pip curl
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
COPY ./capstone-sptt.json /root/credential_key.json
RUN gcloud auth activate-service-account --key-file=credential_key.json && gcloud config set project $PROJ_NAME
RUN pip3 install --upgrade pip && pip3 install --upgrade google-cloud-storage && pip3 install --upgrade google-cloud-speech && pip3 install flask flask_cors
RUN pip3 install pandas matplotlib
RUN gcloud auth activate-service-account --key-file credential_key.json
ENV GOOGLE_APPLICATION_CREDENTIALS="/root/credential_key.json"
ENTRYPOINT [ "flask", "run" , "--host", "0.0.0.0"]
from flask import Flask, request, jsonify
from flask_cors import CORS, cross_origin
from chatDownloader import *
app = Flask(__name__)
cors = CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'
@app.route('/chat-api')
@cross_origin()
def chat_analysis():
bucket_name = "capstone-sptt-storage"
file_name = request.args.get("fileName")
destination_file_name = "chat.csv"
download_file(bucket_name, file_name, destination_file_name)
return jsonify(analysis(bucket_name, file_name))
if __name__ == "__main__":
app.run()
'''
Selection method
1. Remove noise with a median filter - OK
2. Find local minima (splitting the data into 30-minute intervals)
3. Find values above a threshold (the smallest of the local minima is used as the threshold)
4. Interval? Current: one minute before and after each peak
5. If a qualifying peak falls within that one-minute window, connect the intervals
'''
import math
import requests
import json
import sys
import time
import csv
import pandas as pd
import numpy as np
from importlib import reload
from google.cloud import storage
from collections import OrderedDict
def convert_to_sec(time) :
splited_time = time.split(':')
hours = int(splited_time[0])
minutes = int(splited_time[1])
seconds = int(splited_time[2])
return (hours * 3600) + (minutes * 60) + seconds
def convert_to_interval(idx) :
end = idx * 120
start = end - 120
return str(start) + " - " + str(end)
def convert_to_start(time) :
strip_str = time.strip()
start = strip_str.split('-')[0]
return int(start)
def convert_to_end(time) :
strip_str = time.strip()
end = strip_str.split('-')[1]
return int(end)
def median_filter(data,filter_size) :
for x in range(len(data)) :
median_list = []
for index in range(x-filter_size, x+filter_size+1) :
if (index >= 0 and index < len(data)) :
median_list.append(data[index])
data[x] = get_median_value(median_list)
return data
def get_median_value(median_list) :
median_idx = len(median_list)//2
median_list.sort()
return median_list[median_idx]
def get_frequency_graph_url(timeCountSeries, file_name, bucket_name) :
ax = timeCountSeries.plot(title='chat numbers', figsize=(20, 5))
fig = ax.get_figure()
fig.savefig(str(file_name)+'.png')
return upload_to_GCS(bucket_name, file_name)
def get_local_maximum_df(time_count_df):
max_time = time_count_df['time'].max()
bins = np.arange(0,max_time,900)
ind = np.digitize(time_count_df["time"], bins)
time_count_df["location"] = ind
location_groups = time_count_df.groupby('location')
local_maximum_df = pd.DataFrame(columns = ['time','chat_count', 'location'])
for location, location_group in location_groups:
local_maximum = location_group.sort_values(by='chat_count').tail(1)
local_maximum_df = local_maximum_df.append(local_maximum)
return local_maximum_df
def get_increase_df(time_count_df) :
increase_threshold = math.ceil(time_count_df['chat_count'].mean())-1
cond = ( time_count_df["chat_count"] - time_count_df["chat_count"].shift(-1) ) > increase_threshold
increase_df = time_count_df[cond]
print(increase_df)
return increase_df
def get_interval_list(peak_df, local_maximum_df, time_count_df):
peak_time_list = peak_df['time'].to_list()
result_json = []
for time in peak_time_list :
start = time-60
end = time+60
local_maximum_list = local_maximum_df.query('time<=@time')['chat_count'].tail(1).to_list()
# if (len(local_maximum_list) > 0) :
# local_maximum = local_maximum_list[0]
# end_result_df = time_count_df.query('time>@end & time< @end+60')
# end_result = end_result_df.query('chat_count>=@local_maximum')
# if (len(end_result['time'].to_list()) == 0) :
# print("Origin End : ", end)
# else :
# end = end_result['time'].to_list()[0]
# peak_time_list.append(end+60)
# print("Changed End : ", end)
chat_interval = OrderedDict()
chat_interval['start'] = start
chat_interval['end'] = end
result_json.append(chat_interval)
return result_json
def remove_duplicate_interval(result_json):
response_json = []
for idx, val in enumerate(result_json) :
if (idx == len(result_json)-1) : continue
start = val['start']
end = val['end']
next_start = result_json[idx+1]['start']
next_end = result_json[idx+1]['end']
chat_interval = OrderedDict()
if (next_start <= end) :
end = next_end
chat_interval['start'] = start
chat_interval['end'] = end
result_json[idx+1] = chat_interval
else:
chat_interval['start'] = start
chat_interval['end'] = end
response_json.append(chat_interval)
return response_json
def analysis(bucket_name,file_name):
chat_response = OrderedDict()
############### Chat Frequency Graph
print("Start Analysis")
df = pd.read_csv("chat.csv", names=['time', 'name', 'chat'])
timeCountSeries = df.groupby('time').count()['chat']
timeCountSeries = median_filter(timeCountSeries, 5)
chat_response["chat_frequency_url"] = get_frequency_graph_url(timeCountSeries, file_name, bucket_name)
time_count_df = timeCountSeries.to_frame().reset_index()
time_count_df.columns=['time','chat_count']
time_count_df['time'] = time_count_df['time'].apply(lambda x: convert_to_sec(x))
time_count_df = time_count_df.query('time>300 & time < (time.max()-300)')
############### Local Minimum
local_maximum_df = get_local_maximum_df(time_count_df)
############### Chat Edit Point
increase_df = get_increase_df(time_count_df)
'''Interval selection
minimum : one minute before and after
overlapping intervals are merged
if a value equal to the local minimum appears within that minute, the interval should really be extended further
'''
peak_df = increase_df.append(local_maximum_df)
peak_df = peak_df.sort_values(by='time').drop_duplicates('time', keep='first')
result_json = get_interval_list(peak_df, local_maximum_df, time_count_df)
print ("result_json : " + str(result_json))
response_json = remove_duplicate_interval(result_json)
chat_response["chat_edit_list"] = response_json
# convert_to_json(response_df)
return chat_response
def download_file(bucket_name, file_name, destination_file_name):
print("Start Download File")
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
source_blob_name= file_name+ "/source/" + file_name + ".csv"
blob = bucket.blob(source_blob_name)
blob.download_to_filename(destination_file_name)
print("End Download File")
def upload_to_GCS(bucket_name, file_name):
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
png_blob_name = bucket.blob(file_name+ "/result/chat-frequency.png")
png_blob_name.upload_from_filename( str(file_name) + ".png" )
return file_name+ "/result/chat-frequency.png"
credential_key.json
__pycache__/
## The input folder is volume-mounted; image extraction is performed on the host
FROM ubuntu:16.04
WORKDIR /root
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ENV PROJ_NAME=static-protocol-264107
COPY ./*.py /root/
COPY ./credential_key.json /root/credential_key.json
RUN apt-get -y update && apt-get -y install python3 python3-pip curl
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
RUN gcloud auth activate-service-account --key-file=credential_key.json && gcloud config set project $PROJ_NAME
RUN pip3 install --upgrade pip && pip3 install --upgrade google-cloud-storage && pip3 install --upgrade google-cloud-speech && pip3 install flask flask_cors
RUN gcloud auth activate-service-account --key-file credential_key.json
ENV GOOGLE_APPLICATION_CREDENTIALS="/root/credential_key.json"
COPY input /root/input
ENTRYPOINT [ "flask", "run" , "--host", "0.0.0.0"]
from flask import Flask, request, jsonify
from flask_cors import CORS, cross_origin
from emotion import *
app = Flask(__name__)
cors = CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'
@app.route('/emotion-api')
@cross_origin()
def chat_analysis():
bucket_name = "capstone-sptt-storage"
file_name = request.args.get("fileName")
# download_file(bucket_name, file_name, destination_file_name)
return jsonify(extract_edit_point(file_name))
if __name__ == "__main__":
app.run()
from collections import OrderedDict
def convert_to_sec(time_str) :
hour, minute, sec = time_str.split(":")
return int(sec) + (int(minute)*60) + (int(hour)*3600)
def extract_edit_point(source_file_name) :
f = open("input/" + source_file_name+".txt")
inter_result = []
start = -1
lines = f.readlines()
for line in lines :
time, emotion, percentage = line.split(" ")
if(emotion == "happy" and float(percentage.split("%")[0]) > 90) :
inter_result.append(time)
f.close()
count = 0
output = []
for i, time in enumerate(inter_result) :
timeValue = convert_to_sec(time)
if (start == -1) :
start = timeValue
previous = convert_to_sec(inter_result[i-1].split(" ")[0])
if (timeValue - previous) > 20 :
end = previous
if count > 5 :
output.append(str(start) + " " + str(end))
start = timeValue
count = 0
else :
count = count + 1
result_json = []
for point in output:
start = int(point.split(" ")[0])
end = int(point.split(" ")[1])
emotion_interval = OrderedDict()
emotion_interval['start'] = start
emotion_interval['end'] = end
result_json.append(emotion_interval)
response = OrderedDict()
response["emotion_edit_point"] = result_json
return response
{
"python.pythonPath": "C:\\Users\\JongHyun\\AppData\\Local\\Programs\\Python\\Python36\\python.exe"
}
# emotion-recognition
```shell
pip install -r requirements.txt
```
Input folder : videos/sample_images (edit line 26)
Output : videos/sample_images/emotions.txt
Put the csv file downloaded from the link I have provided here
MIT License
Copyright (c) [2018] [Omar Ayman]
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
from keras.preprocessing.image import img_to_array
import imutils
import cv2
from keras.models import load_model
import numpy as np
import os
import time
def absoluteFilePaths(directory):
for dirpath,_,filenames in os.walk(directory):
for f in filenames:
yield os.path.abspath(os.path.join(dirpath, f))
# parameters for loading data and images
detection_model_path = 'haarcascade_files/haarcascade_frontalface_default.xml'
emotion_model_path = 'models/_mini_XCEPTION.102-0.66.hdf5'
# hyper-parameters for bounding boxes shape
# loading models
face_detection = cv2.CascadeClassifier(detection_model_path)
emotion_classifier = load_model(emotion_model_path, compile=False)
EMOTIONS = ["angry" ,"disgust","scared", "happy", "sad", "surprised",
"neutral"]
# starting video streaming
start = time.time()
image_dir_path = 'videos/sample1_images'
emotionList = []
probList = []
for image_path in absoluteFilePaths(image_dir_path):
if 'txt' in image_path:
continue
frame = cv2.imread(image_path)
#reading the frame
frame = imutils.resize(frame,width=300)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_detection.detectMultiScale(gray,scaleFactor=1.1,minNeighbors=5,minSize=(30,30),flags=cv2.CASCADE_SCALE_IMAGE)
if len(faces) > 0:
faces = sorted(faces, reverse=True,
key=lambda x: (x[2] - x[0]) * (x[3] - x[1]))[0]
(fX, fY, fW, fH) = faces
roi = gray[fY:fY + fH, fX:fX + fW]
roi = cv2.resize(roi, (64, 64))
roi = roi.astype("float") / 255.0
roi = img_to_array(roi)
roi = np.expand_dims(roi, axis=0)
preds = emotion_classifier.predict(roi)[0]
emotion_probability = np.max(preds)
label = EMOTIONS[preds.argmax()]
emotionList.append(label)
probList.append(emotion_probability)
else:
emotionList.append('None')
probList.append(0)
continue
print(time.time()-start)
import datetime
image_interval = 1
time = datetime.datetime.strptime('00:00:00','%H:%M:%S')
with open(image_dir_path+'/emotions.txt','w') as file:
for emotion, prob in zip(emotionList,probList):
file.write(time.strftime("%H:%M:%S ")+emotion+' {:.2f}%'.format(prob*100) + '\n')
time += datetime.timedelta(seconds=image_interval)
.DS_Store
sliced*
script*
audio*
*.json
FROM ubuntu:16.04
WORKDIR /root
EXPOSE 6000
ENV PROJ_NAME=static-protocol-264107
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
COPY ./*.py /root/
RUN apt-get -y update && apt-get -y install python3 python3-pip curl
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
COPY ./credential_key.json /root/credential_key.json
RUN gcloud auth activate-service-account --key-file=credential_key.json && gcloud config set project $PROJ_NAME
RUN pip3 install --upgrade pip && apt install -y ffmpeg && pip3 install --upgrade google-cloud-storage && pip3 install --upgrade google-cloud-speech && pip3 install wave pydub && pip3 install flask && pip3 install nltk tomotopy && pip3 install flask_cors
RUN pip3 install krwordrank && pip3 install konlpy && pip3 install scipy && pip3 install sklearn
RUN gcloud auth activate-service-account --key-file credential_key.json
ENV GOOGLE_APPLICATION_CREDENTIALS="/root/credential_key.json"
RUN apt-get install openjdk-8-jdk -y
ENTRYPOINT [ "flask", "run" , "--host", "0.0.0.0"]
## Audio to Topic
### Audio to Script
Audio Extension : `wav`
Rate in Hertz of the Audio : 16000
Channel : mono
#### Process
1. The audio file is split and saved in one-minute chunks so it can be used with Google STT (Speech-to-Text)
2. Each one-minute audio file becomes the STT input and produces a one-minute script
3. The one-minute scripts are merged into user-defined M-minute blocks (currently 10 minutes) for topic modeling
**Final output**: `sliced_0.txt` ... `sliced_N.txt` (N script files, each covering M minutes)
### Script to Topic
A topic is extracted for each of the N script files.
Input Extension : `txt`
Encoding : `cp949`
#### Process
1. Given N as input, spawn the processes that will run LDA
2. Each process runs LDA on its assigned script
- English-based tokenization is performed
- Each run uses 5 topics
- The target vocabulary size is set automatically based on the input
### Summarization
#### Process
1. Generate a summary from the total script
Korean version (kor_sentence_word_extractor.py):
library : krwordrank
```bash
pip install krwordrank
```
input : set the file path on line 6
output : set the file paths on line 24 (key words) and line 42 (key sentences)
Common:
Sentences in the input file must be separated by line breaks.
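A minimal sketch of the Korean summarization step, assuming the `krwordrank` package's `KRWordRank` and `summarize_with_sentences` APIs and a hypothetical newline-separated input file `total_script.txt`; the exact paths and parameters used in `kor_sentence_word_extractor.py` may differ:
```python
from krwordrank.word import KRWordRank
from krwordrank.sentence import summarize_with_sentences

# One sentence per line, as required above (encoding assumed to be utf-8;
# switch to cp949 if the script files use that encoding).
with open('total_script.txt', encoding='utf-8') as f:
    texts = [line.strip() for line in f if line.strip()]

# Key words: KRWordRank scores word candidates without a dictionary.
extractor = KRWordRank(min_count=2, max_length=10)
keywords, rank, graph = extractor.extract(texts, beta=0.85, max_iter=10)

# Key sentences: pick sentences that best cover the extracted key words.
keywords, sentences = summarize_with_sentences(texts, num_keywords=20, num_keysents=5)

with open('keywords.txt', 'w', encoding='utf-8') as f:
    f.write('\n'.join(keywords))
with open('keysentences.txt', 'w', encoding='utf-8') as f:
    f.write('\n'.join(sentences))
```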
### API
GET `/lda-api?fileName={name}`
Request Parameter : fileName (file name in GCS)
Response Body : fileName
Test set
Bucket-name : capstone-test
Test-fileName : test_shark.wav
Running-time : 6min
API Result
![](lda-api-test.png)
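A minimal client-side sketch of calling this service, assuming the Flask app runs locally on port 5000 and exposes the `/script-api` route registered in `app.py`; the host, port, and route name may differ in your deployment:
```python
import requests

# File name in GCS (see the test set above); app.py builds the blob path as
# "<fileName>/source/<fileName>.wav" from this value.
params = {"fileName": "test_shark"}

# STT plus LDA is slow (about 6 minutes for the test set), so use a generous timeout.
response = requests.get("http://localhost:5000/script-api", params=params, timeout=900)
response.raise_for_status()

result = response.json()
print(result["fullScript"])              # GCS URL of the merged script
for item in result["topicEditList"]:     # one topic entry per merged script block
    print(item["start"], item["end"], item["topic"])
```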
from flask import Flask, request, jsonify
from video_loader import download_audio, divide_audio, sample_recognize_short
from kor_sentence_extractor import script_to_summary
from topic_maker import make_topic
from collections import OrderedDict
from flask_cors import CORS, cross_origin
import json
app = Flask(__name__)
cors = CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'
@app.route('/script-api')
@cross_origin()
def extractor():
# audio download -> sliced audio
bucket_name = "capstone-test"
video_name = request.args.get("fileName")
destination_file_name = "audio.wav"
blob_name = video_name + "/source/" + video_name + ".wav"
download_audio(bucket_name, blob_name, destination_file_name)
divide_audio(destination_file_name)
# sliced audio -> sliced script, total script
count_script = sample_recognize_short(destination_file_name)
# sliced-script -> topic words
topics = make_topic(count_script)
script_url = "https://storage.cloud.google.com/" + bucket_name + "/" + video_name + "/result/total_script.txt"
return make_response(script_url, topics)
def make_response(script_url, topics):
scriptItem = OrderedDict()
scriptItem["fullScript"] = script_url
scriptItem["topicEditList"] = topics
return jsonify(scriptItem)
if __name__ == "__main__":
app.run(port = 5000)
print('Precision Result: 0.578947')
print('Recall Result: 0.5625')
print('---------------------------')
print('F1 Score: 0.5760...')
from konlpy.tag import Hannanum
hannanum = Hannanum()
def tokenize(sent):
token = hannanum.nouns(sent)
stop_words = ['그것', '이것', '저것', '이다', '때문', '하다', '그거', '이거', '저거', '되는', '그게', '아니', '저게', '이게', '지금', '여기', '저기', '거기']
return [word for word in token if len(word) != 1 and word not in stop_words]
# -*- coding: utf-8 -*-
#
import tomotopy as tp
from tokenizer import tokenize
from multiprocessing import Process, Manager
from collections import OrderedDict
import os
def make_topic(count_script):
# Run multiple LDA jobs with multiprocessing
manager = Manager()
numbers = manager.list()
results = manager.list()
file_names = []
file_numbers = []
procs = []
for i in range(0, count_script):
file_names.append('script_' + str(i) + '.txt')
file_numbers.append(str(i))
for index, file_name in enumerate(file_names):
proc = Process(target=core, args=(file_name, file_numbers[index], numbers, results))
procs.append(proc)
proc.start()
for proc in procs:
proc.join()
os.remove("audio.wav")
return make_json(numbers, results)
def core(file_name, file_number, numbers, results):
# Print the currently running worker process
current_proc = os.getpid()
print('now {0} lda worker running...'.format(current_proc))
model = tp.LDAModel(k=3, alpha=0.1, eta=0.01, min_cf=5)
# Create the LDAModel
# Number of topics (k) is 3, the alpha parameter is 0.1, the eta parameter is 0.01
# Words appearing fewer than 5 times in the whole corpus are removed
# The following loop reads the input file line by line and adds each line to the model
for i, line in enumerate(open(file_name, encoding='cp949')):
token = tokenize(line)
model.add_doc(token)
if i % 10 == 0: print('Document #{} has been loaded'.format(i))
model.train(0)
print('Total docs:', len(model.docs))
print('Total words:', model.num_words)
print('Vocab size:', model.num_vocabs)
model.train(200)
# Print the trained topics
for i in range(model.k):
res = model.get_topic_words(i, top_n=5)
print('Topic #{}'.format(i), end='\t')
topic = ', '.join(w for w, p in res)
print(topic)
numbers.append(file_number)
results.append(topic)
def make_json(numbers, results):
print(numbers)
print(results)
topic_list = []
# file number -> script time
for num, result in zip(numbers, results):
detail = OrderedDict()
detail["start"] = int(num) * 590
detail["end"] = (int(num)+1) * 590
detail["topic"] = result
topic_list.append(detail)
print(topic_list)
return topic_list
# -*- coding: utf-8 -*-
#TODO:
# 1. get Audio from Videos - done
# 2. cut Audio (interval : 1m) -done
# 3. make script - done
# 4. merge script (10m)
from google.cloud import storage
from google.cloud import speech_v1
from google.cloud.speech_v1 import enums
from topic_maker import make_topic
import io
import wave
import contextlib
from pydub import AudioSegment
import glob
import os
def download_audio(bucket_name, source_blob_name, destination_file_name):
"""Downloads a blob from the bucket."""
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(source_blob_name)
blob.download_to_filename(destination_file_name)
print(
"Blob {} downloaded to {}.".format(
source_blob_name, destination_file_name
)
)
def getStorageUri(bucket_name, file_name):
return "gs://" + bucket_name + "/" + file_name
def sample_recognize_short(destination_file_name):
"""
Transcribe a short audio file using synchronous speech recognition
Args:
local_file_path Path to local audio file, e.g. /path/audio.wav
"""
client = speech_v1.SpeechClient()
# The language of the supplied audio
language_code = "ko-KR"
# Sample rate in Hertz of the audio data sent
sample_rate_hertz = 16000
# Encoding of audio data sent. This sample sets this explicitly.
# This field is optional for FLAC and WAV audio formats.
encoding = enums.RecognitionConfig.AudioEncoding.LINEAR16
config = {
"language_code": language_code,
"sample_rate_hertz": sample_rate_hertz,
"encoding": encoding,
}
local_files = sorted(glob.glob("./sliced*"), key=os.path.getctime)
script_index = 0
merged_script = ""
total_script = ""
for local_file_path in local_files :
if (is_start(local_file_path)) :
print("Start Time")
write_merged_script(merged_script, script_index)
merged_script = ""
script_index += 1
with io.open(local_file_path, "rb") as f:
content = f.read()
audio = {"content": content}
response = client.recognize(config, audio)
print(u"Current File : " + local_file_path)
for result in response.results:
# First alternative is the most probable result
alternative = result.alternatives[0]
merged_script += (alternative.transcript + "\n")
total_script += (alternative.transcript + "\n")
os.remove(local_file_path)
if (merged_script != "") :
print("remained")
write_merged_script(merged_script, script_index)
write_total_script(total_script)
return script_index + 1
def is_start(file_path) :
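# Sliced chunks are named sliced_<start>-<end>.wav; a chunk whose start time is a
# non-zero multiple of 590 seconds (10 minutes) marks the beginning of a new merged script.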
start_time = int(file_path.split("_")[1].split(".")[0].split("-")[0])
if (start_time != 0 and start_time % (590) == 0) :
return True
return False
def write_total_script(total_script):
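# Re-wrap the transcript into short fixed-length lines (roughly ten words each);
# the downstream extractors treat one line as one sentence.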
line_breaker = 10
idx = 1
all_words = total_script.split(' ')
script_name = "total_script.txt"
fd = open(script_name,'w')
for word in all_words :
if(idx == line_breaker):
fd.write(word.strip('\n')+"\n")
idx = 0
else :
fd.write(word.strip('\n')+" ")
idx += 1
fd.close()
def write_merged_script(merged_script, script_index) :
line_breaker = 10
idx = 1
all_words = merged_script.split(' ')
script_name = "script_" + str(script_index) + ".txt"
fd = open(script_name,'w')
for word in all_words :
if(idx == line_breaker):
fd.write(word.strip('\n')+"\n")
idx = 0
else :
fd.write(word.strip('\n')+" ")
idx += 1
fd.close()
def divide_audio(destination_file_name):
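# Cut the audio into 59-second chunks: Google's synchronous recognize() request
# only accepts roughly one minute of audio at a time.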
duration = get_audio_duration(destination_file_name)
for start in range(0,duration, 59) :
if (duration - start < 59) :
end = duration
else :
end = start + 59
save_sliced_audio(start, end, destination_file_name)
def save_sliced_audio(start,end, destination_file_name) :
audio = AudioSegment.from_wav(destination_file_name)
audio = audio.set_channels(1)
audio = audio.set_frame_rate(16000)
file_name = "sliced_" + str(start) + "-" + str(end) + ".wav"
start_time = start * 1000
end_time = end * 1000
audio[start_time:end_time].export(file_name ,format = "wav")
def get_audio_duration(destination_file_name):
with contextlib.closing(wave.open(destination_file_name, 'r')) as f:
frames = f.getnframes()
rate = f.getframerate()
duration = frames/float(rate)
return int(duration)
def get_frame_rate(destination_file_name) :
with contextlib.closing(wave.open(destination_file_name, 'r')) as f:
return f.getframerate()
\ No newline at end of file
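The entry point that wires these helpers together is not part of this file. A minimal driver sketch, assuming a placeholder bucket/blob name and that make_topic (imported above, defined elsewhere) accepts the number of merged scripts returned by sample_recognize_short:

```python
# Sketch only: "my-bucket" and "lecture.wav" are placeholders, and make_topic's
# signature is an assumption.
if __name__ == "__main__":
    download_audio("my-bucket", "lecture.wav", "audio.wav")   # 1. fetch audio from Cloud Storage
    divide_audio("audio.wav")                                 # 2. slice into 59-second WAV chunks
    script_count = sample_recognize_short("audio.wav")        # 3. transcribe chunks -> script_*.txt
    topics = make_topic(script_count)                         # 4. one LDA topic set per 10-minute script
    print(topics)
```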
### main sentence and word extractor
---
Korean version (kor_sentence_word_extractor.py):
library : krwordrank
```bash
pip install krwordrank
```
input : set the input file path on line 6
output : set the output file paths on line 24 (key words) and line 42 (key sentences)
English version (eng_sentence_extractor):
library : summa
```bash
pip install summa
```
input : set the input file path on line 3
output : set the output file path on line 10
Common:
Each sentence in the input file must be on its own line (line breaks separate sentences).
\ No newline at end of file
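A possible way to run the two scripts, assuming they are executed from the directory that holds the input files named above (the English script is assumed to be eng_sentence_extractor.py):

```bash
pip install krwordrank summa
python kor_sentence_word_extractor.py   # reads kor_input.txt -> kor_word_output.txt, kor_sentence_output.txt
python eng_sentence_extractor.py        # reads eng_input.txt -> eng_sentence_output.txt
```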
take me to the water but unmistakable Grace
remnants of an ancient past the dive and they rise from the oceans Market apps to add Sunkist shallows browsing fear and all like no other creature in the sea
the world's biggest living fish is a shark of the estimated 34,000 species of fish the largest are whale sharks
these gentle Giants usually grow to about 40 feet long and weigh an estimated 15 Tons
the gigantic whale shark however pales in comparison to the largest fish that ever existed the Megalodon dating to over 20 million years ago it's thought that the Prius shark up to around 70 tons unlike whale sharks the Megalodon was carnivorous and consumed any creature that fits a 10 foot wide mouth
throughout their lives some species of shark Can Shed over 30,000 T unlike humans are born with a set number of teeth in their jaws sharks have a seemingly Limitless ply they can grow lose and replace their teeth as needed furthermore most sharks have multiple different great white shark the largest predatory fish in the sea can contain up to seven rows that holds up to 300 teeth at any one point most sharks as they hunt their prey end up losing their teeth individually however the cookie cutter sharp losses and replaces the teeth and it's the lower jaw all at once
sharks are built for Speed
the fastest known shark the mako shark can reach speeds of up to 46 miles per hour this feed is a largely due to their bodies hydro dynamic design many sharks have cookie dough Shake has that allow them to cut through the water with Little Resistance plus shark skin is covered with flat V shape scale is called dermal denticles the denticles help water flow smoothly over the skin which reduces friction and helps sharks swim quickly and quietly
sharks also have skeletons made of cartilage instead of bone cartilage is a much lighter material than bone so sharks have less weight to carry
shorts me lay eggs or bear live young egg-laying sharks lay a few large eggs they may come in various forms such as sex called mermaid purses corkscrews
these eggs at does external use in which shark embryos complete their development however most sharks give birth to live young called pups the young of Mo Library species just a four round one year some even begin practicing their skills while in the womb before they are born to stand tiger shark eat with their siblings the strongest puppy each of the two worms devours it sweeter brothers and sisters
some sharks are at risk of Extinction
every year an estimated 100 million sharks are killed worldwide in large part for the shark fin trade
the sharks are caught and their Dorsal fins are removed and sold a hefty priced primarily in Asia in traditional Chinese culture serving and eating sharks in is a sign of status and well because of the high demand and value of sharks in Shark populations have plummeted by up to 70% causing a ripple effect in ecosystems and endangering at least 74 shark species however measures are being taken to protect sharks with a number of countries and jurisdictions cracking down on unsustainable shark fishing in China shark fin soup is no longer allowed to be served at government banquet a move hailed by shark conservationist
continued International conservation efforts the loss of sharks may be curbed allowing the creatures in all the power and Grace to survive for many generations to come
from summa.summarizer import summarize
# Read the text to be analyzed
fileName = 'eng_input.txt'
texts = ''
with open(fileName, encoding='utf-8-sig') as file:
for line in file:
texts += line.split(',')[-1] # adjust this split to match the structure of your input text
# Write the key sentences extracted by summa
with open('eng_sentence_output.txt',mode='w',encoding='utf-8-sig') as file:
file.write(summarize(texts, language='english', ratio=0.1)) # ratio: fraction of the sentences to keep in the summary
the gigantic whale shark however pales in comparison to the largest fish that ever existed the Megalodon dating to over 20 million years ago it's thought that the Prius shark up to around 70 tons unlike whale sharks the Megalodon was carnivorous and consumed any creature that fits a 10 foot wide mouth
\ No newline at end of file
 트린 블리츠 볼리베어 아 오늘 아침에 일어나자마자 배준식 아닌데 카톡 왔어요 한국 간다고 지금 비행기라고 아이 개새끼 한 달 아 존나 부럽네
근데 이거 제가 무슨 광고인지 말씀드려도 돼요 제가 이거 아이디 제가 해 놨거든요 상관 없으세요 근데 이런 광고를 저한테 주시는 뭐라고 하는지
아이 감사합니다 아 왜 자꾸 아프지 마시고 어 그러네 그런데 이게 안 쓰는 괜찮아 이런 식으로 리더
오늘은 프라이드가 먹고 싶은 날인 거 같은데 아 근데 저거를 받았는데 그게 뭐야 제가 그 집에 여기 있는 줄 알았는데 수리 무료 분양 어디냐고
아이 노래 아이 노래 시간이 제가 이거 코스로 내가 아 이게
어제 지원 가서 신발 받아 왔어요 자랑 뭐 자랑 하라고 이제 얘기는 안 했는데 잼 돈 돈을 받아 왔습니다
일주일 전에 비 모아 놨는데 아 이거구나 이거 제가 이메일 드리지 않아도 이메일로 이거
오늘도 이제 트위치에서 아유 감사합니다 오늘 트위치에서 이제 또 연락이 한번 왔는데
이게 그 비율이 안 맞아 가지고 화질이 좀 구려 보인다고 하던데 이거를 어떻게 비율을 맞춰야 되는데 모르지 24k 매직이 뭐예요
아 오늘 아침으로 진진자라 먹고 왔어요 진진자라
from krwordrank.word import KRWordRank
from krwordrank.sentence import make_vocab_score
from krwordrank.sentence import MaxScoreTokenizer
from krwordrank.sentence import keysentence
# Read the text to be analyzed
fileName = 'kor_input.txt'
texts = []
with open(fileName, encoding='utf-8-sig') as file:
for line in file:
texts.append(line.split(',')[-1].rstrip()) # adjust this split to match the structure of your input text
# Train the keyword extractor
wordrank_extractor = KRWordRank(
min_count=5, # minimum number of times a word must appear
max_length=10, # maximum word length
verbose = True
)
beta = 0.85
max_iter = 10
keywords, rank, graph = wordrank_extractor.extract(texts, beta, max_iter, num_keywords=100)
# Write the top keywords
with open('kor_word_output.txt',mode='w',encoding='utf-8-sig') as file:
for word, r in sorted(keywords.items(), key=lambda x:x[1], reverse=True)[:10]:
file.write('%8s:\t%.4f\n' % (word, r))
stopwords = set() # no stopwords are excluded here
vocab_score = make_vocab_score(keywords, stopwords, scaling=lambda x : 1)
tokenizer = MaxScoreTokenizer(vocab_score) # tokenizer that scores words within a sentence
# Penalize sentences outside the 25-80 character range and extract key sentences
penalty = lambda x: 0 if 25 <= len(x) <= 80 else 1
sentences = keysentence(
vocab_score, texts, tokenizer.tokenize,
penalty=penalty,
diversity=0.3,
topk=10 # number of key sentences to extract
)
# Write the extracted key sentences
with open('kor_sentence_output.txt',mode='w',encoding='utf-8-sig') as file:
for sentence in sentences:
file.write(sentence+'\n')
\ No newline at end of file
 이제: 2.7844
제가: 2.5266
오늘: 2.3003
근데: 2.2219
아이: 2.1413
감사합니다: 1.8192
안녕하세요: 1.8031
만약에: 1.6631
그래: 1.5640
얘기: 1.5291