백종현

Add source_code

Showing 94 changed files with 2074 additions and 0 deletions
1 +# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
2 +
3 +# dependencies
4 +/node_modules
5 +/.pnp
6 +.pnp.js
7 +
8 +# testing
9 +/coverage
10 +
11 +# production
12 +/build
13 +
14 +# misc
15 +.DS_Store
16 +.env.local
17 +.env.development.local
18 +.env.test.local
19 +.env.production.local
20 +
21 +npm-debug.log*
22 +yarn-debug.log*
23 +yarn-error.log*
1 +This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app).
2 +
3 +## Available Scripts
4 +
5 +In the project directory, you can run:
6 +
7 +### `npm start`
8 +
9 +Runs the app in development mode.<br />
10 +Open [http://localhost:3000](http://localhost:3000) to view it in the browser.
11 +
12 +The page will reload if you make edits.<br />
13 +You will also see any lint errors in the console.
14 +
15 +### `npm test`
16 +
17 +Launches the test runner in interactive watch mode.<br />
18 +See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information.
19 +
20 +### `npm run build`
21 +
22 +Builds the app for production to the `build` folder.<br />
23 +It correctly bundles React in production mode and optimizes the build for the best performance.
24 +
25 +The build is minified and the filenames include the hashes.<br />
26 +Your app is ready to be deployed!
27 +
28 +See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information.
29 +
30 +### `npm run eject`
31 +
32 +**Note: this is a one-way operation. Once you `eject`, you can’t go back!**
33 +
34 +If you aren’t satisfied with the build tool and configuration choices, you can `eject` at any time. This command will remove the single build dependency from your project.
35 +
36 +Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except `eject` will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.
37 +
38 +You don’t ever have to use `eject`. The curated feature set is suitable for small and mid-sized deployments, and you shouldn’t feel obligated to use this feature. However, we understand that this tool wouldn’t be useful if you couldn’t customize it when you’re ready for it.
39 +
40 +## Learn More
41 +
42 +You can learn more in the [Create React App documentation](https://facebook.github.io/create-react-app/docs/getting-started).
43 +
44 +To learn React, check out the [React documentation](https://reactjs.org/).
1 +{
2 + "name": "capstone",
3 + "version": "0.1.0",
4 + "private": true,
5 + "dependencies": {
6 + "@testing-library/jest-dom": "^4.2.4",
7 + "@testing-library/react": "^9.5.0",
8 + "@testing-library/user-event": "^7.2.1",
9 + "@types/jest": "^24.9.1",
10 + "@types/node": "^12.12.37",
11 + "@types/react": "^16.9.34",
12 + "@types/react-dom": "^16.9.7",
13 + "react": "^16.13.1",
14 + "react-dom": "^16.13.1",
15 + "react-scripts": "3.4.1",
16 + "typescript": "^3.7.5"
17 + },
18 + "scripts": {
19 + "start": "react-scripts start",
20 + "build": "react-scripts build",
21 + "test": "react-scripts test",
22 + "eject": "react-scripts eject"
23 + },
24 + "eslintConfig": {
25 + "extends": "react-app"
26 + },
27 + "browserslist": {
28 + "production": [
29 + ">0.2%",
30 + "not dead",
31 + "not op_mini all"
32 + ],
33 + "development": [
34 + "last 1 chrome version",
35 + "last 1 firefox version",
36 + "last 1 safari version"
37 + ]
38 + }
39 +}
1 +<!DOCTYPE html>
2 +<html lang="en">
3 + <head>
4 + <meta charset="utf-8" />
5 + <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
6 + <meta name="viewport" content="width=device-width, initial-scale=1" />
7 + <meta name="theme-color" content="#000000" />
8 + <meta
9 + name="description"
10 + content="Web site created using create-react-app"
11 + />
12 + <link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
13 + <!--
14 + manifest.json provides metadata used when your web app is installed on a
15 + user's mobile device or desktop. See https://developers.google.com/web/fundamentals/web-app-manifest/
16 + -->
17 + <link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
18 + <!--
19 + Notice the use of %PUBLIC_URL% in the tags above.
20 + It will be replaced with the URL of the `public` folder during the build.
21 + Only files inside the `public` folder can be referenced from the HTML.
22 +
23 + Unlike "/favicon.ico" or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will
24 + work correctly both with client-side routing and a non-root public URL.
25 + Learn how to configure a non-root public URL by running `npm run build`.
26 + -->
27 + <title>React App</title>
28 + </head>
29 + <body>
30 + <noscript>You need to enable JavaScript to run this app.</noscript>
31 + <div id="root"></div>
32 + <!--
33 + This HTML file is a template.
34 + If you open it directly in the browser, you will see an empty page.
35 +
36 + You can add webfonts, meta tags, or analytics to this file.
37 + The build step will place the bundled scripts into the <body> tag.
38 +
39 + To begin the development, run `npm start` or `yarn start`.
40 + To create a production bundle, use `npm run build` or `yarn build`.
41 + -->
42 + </body>
43 +</html>
1 +{
2 + "short_name": "React App",
3 + "name": "Create React App Sample",
4 + "icons": [
5 + {
6 + "src": "favicon.ico",
7 + "sizes": "64x64 32x32 24x24 16x16",
8 + "type": "image/x-icon"
9 + },
10 + {
11 + "src": "logo192.png",
12 + "type": "image/png",
13 + "sizes": "192x192"
14 + },
15 + {
16 + "src": "logo512.png",
17 + "type": "image/png",
18 + "sizes": "512x512"
19 + }
20 + ],
21 + "start_url": ".",
22 + "display": "standalone",
23 + "theme_color": "#000000",
24 + "background_color": "#ffffff"
25 +}
1 +# https://www.robotstxt.org/robotstxt.html
2 +User-agent: *
3 +Disallow:
1 +.App {
2 + text-align: center;
3 +}
4 +
5 +.App-logo {
6 + height: 40vmin;
7 + pointer-events: none;
8 +}
9 +
10 +@media (prefers-reduced-motion: no-preference) {
11 + .App-logo {
12 + animation: App-logo-spin infinite 20s linear;
13 + }
14 +}
15 +
16 +.App-header {
17 + background-color: #282c34;
18 + min-height: 100vh;
19 + display: flex;
20 + flex-direction: column;
21 + align-items: center;
22 + justify-content: center;
23 + font-size: calc(10px + 2vmin);
24 + color: white;
25 +}
26 +
27 +.App-link {
28 + color: #61dafb;
29 +}
30 +
31 +@keyframes App-logo-spin {
32 + from {
33 + transform: rotate(0deg);
34 + }
35 + to {
36 + transform: rotate(360deg);
37 + }
38 +}
1 +import React from 'react';
2 +import { render } from '@testing-library/react';
3 +import App from './App';
4 +
5 +test('renders learn react link', () => {
6 + const { getByText } = render(<App />);
7 + const linkElement = getByText(/learn react/i);
8 + expect(linkElement).toBeInTheDocument();
9 +});
1 +import React from 'react';
2 +import logo from './logo.svg';
3 +import './App.css';
4 +
5 +function App() {
6 + return (
7 + <div className="App">
8 + <header className="App-header">
9 + <img src={logo} className="App-logo" alt="logo" />
10 + <p>
11 + Edit <code>src/App.tsx</code> and save to reload.
12 + </p>
13 + <a
14 + className="App-link"
15 + href="https://reactjs.org"
16 + target="_blank"
17 + rel="noopener noreferrer"
18 + >
19 + Learn React
20 + </a>
21 + </header>
22 + </div>
23 + );
24 +}
25 +
26 +export default App;
1 +body {
2 + margin: 0;
3 + font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
4 + 'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
5 + sans-serif;
6 + -webkit-font-smoothing: antialiased;
7 + -moz-osx-font-smoothing: grayscale;
8 +}
9 +
10 +code {
11 + font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New',
12 + monospace;
13 +}
1 +import React from 'react';
2 +import ReactDOM from 'react-dom';
3 +import './index.css';
4 +import App from './App';
5 +import * as serviceWorker from './serviceWorker';
6 +
7 +ReactDOM.render(
8 + <React.StrictMode>
9 + <App />
10 + </React.StrictMode>,
11 + document.getElementById('root')
12 +);
13 +
14 +// If you want your app to work offline and load faster, you can change
15 +// unregister() to register() below. Note this comes with some pitfalls.
16 +// Learn more about service workers: https://bit.ly/CRA-PWA
17 +serviceWorker.unregister();
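The bootstrap file above ships with the service worker disabled. If offline support is wanted later, `unregister()` can be swapped for `register()`, which accepts the optional `onSuccess`/`onUpdate` callbacks declared in `src/serviceWorker.ts`. A minimal sketch (the log messages and the `swConfig` name are illustrative assumptions):

```typescript
// Mirrors the Config type declared in src/serviceWorker.ts.
type Config = {
  onSuccess?: (registration: unknown) => void;
  onUpdate?: (registration: unknown) => void;
};

// Hypothetical callbacks: announce when content is cached and when an update is waiting.
const swConfig: Config = {
  onSuccess: () => console.log("Content is cached for offline use."),
  onUpdate: () => console.log("New version available; it activates once all tabs are closed."),
};

// In index.tsx this would replace the unregister() call:
// serviceWorker.register(swConfig);
```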
1 +<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 841.9 595.3">
2 + <g fill="#61DAFB">
3 + <path d="M666.3 296.5c0-32.5-40.7-63.3-103.1-82.4 14.4-63.6 8-114.2-20.2-130.4-6.5-3.8-14.1-5.6-22.4-5.6v22.3c4.6 0 8.3.9 11.4 2.6 13.6 7.8 19.5 37.5 14.9 75.7-1.1 9.4-2.9 19.3-5.1 29.4-19.6-4.8-41-8.5-63.5-10.9-13.5-18.5-27.5-35.3-41.6-50 32.6-30.3 63.2-46.9 84-46.9V78c-27.5 0-63.5 19.6-99.9 53.6-36.4-33.8-72.4-53.2-99.9-53.2v22.3c20.7 0 51.4 16.5 84 46.6-14 14.7-28 31.4-41.3 49.9-22.6 2.4-44 6.1-63.6 11-2.3-10-4-19.7-5.2-29-4.7-38.2 1.1-67.9 14.6-75.8 3-1.8 6.9-2.6 11.5-2.6V78.5c-8.4 0-16 1.8-22.6 5.6-28.1 16.2-34.4 66.7-19.9 130.1-62.2 19.2-102.7 49.9-102.7 82.3 0 32.5 40.7 63.3 103.1 82.4-14.4 63.6-8 114.2 20.2 130.4 6.5 3.8 14.1 5.6 22.5 5.6 27.5 0 63.5-19.6 99.9-53.6 36.4 33.8 72.4 53.2 99.9 53.2 8.4 0 16-1.8 22.6-5.6 28.1-16.2 34.4-66.7 19.9-130.1 62-19.1 102.5-49.9 102.5-82.3zm-130.2-66.7c-3.7 12.9-8.3 26.2-13.5 39.5-4.1-8-8.4-16-13.1-24-4.6-8-9.5-15.8-14.4-23.4 14.2 2.1 27.9 4.7 41 7.9zm-45.8 106.5c-7.8 13.5-15.8 26.3-24.1 38.2-14.9 1.3-30 2-45.2 2-15.1 0-30.2-.7-45-1.9-8.3-11.9-16.4-24.6-24.2-38-7.6-13.1-14.5-26.4-20.8-39.8 6.2-13.4 13.2-26.8 20.7-39.9 7.8-13.5 15.8-26.3 24.1-38.2 14.9-1.3 30-2 45.2-2 15.1 0 30.2.7 45 1.9 8.3 11.9 16.4 24.6 24.2 38 7.6 13.1 14.5 26.4 20.8 39.8-6.3 13.4-13.2 26.8-20.7 39.9zm32.3-13c5.4 13.4 10 26.8 13.8 39.8-13.1 3.2-26.9 5.9-41.2 8 4.9-7.7 9.8-15.6 14.4-23.7 4.6-8 8.9-16.1 13-24.1zM421.2 430c-9.3-9.6-18.6-20.3-27.8-32 9 .4 18.2.7 27.5.7 9.4 0 18.7-.2 27.8-.7-9 11.7-18.3 22.4-27.5 32zm-74.4-58.9c-14.2-2.1-27.9-4.7-41-7.9 3.7-12.9 8.3-26.2 13.5-39.5 4.1 8 8.4 16 13.1 24 4.7 8 9.5 15.8 14.4 23.4zM420.7 163c9.3 9.6 18.6 20.3 27.8 32-9-.4-18.2-.7-27.5-.7-9.4 0-18.7.2-27.8.7 9-11.7 18.3-22.4 27.5-32zm-74 58.9c-4.9 7.7-9.8 15.6-14.4 23.7-4.6 8-8.9 16-13 24-5.4-13.4-10-26.8-13.8-39.8 13.1-3.1 26.9-5.8 41.2-7.9zm-90.5 125.2c-35.4-15.1-58.3-34.9-58.3-50.6 0-15.7 22.9-35.6 58.3-50.6 8.6-3.7 18-7 27.7-10.1 5.7 19.6 13.2 40 22.5 60.9-9.2 20.8-16.6 41.1-22.2 60.6-9.9-3.1-19.3-6.5-28-10.2zM310 490c-13.6-7.8-19.5-37.5-14.9-75.7 
1.1-9.4 2.9-19.3 5.1-29.4 19.6 4.8 41 8.5 63.5 10.9 13.5 18.5 27.5 35.3 41.6 50-32.6 30.3-63.2 46.9-84 46.9-4.5-.1-8.3-1-11.3-2.7zm237.2-76.2c4.7 38.2-1.1 67.9-14.6 75.8-3 1.8-6.9 2.6-11.5 2.6-20.7 0-51.4-16.5-84-46.6 14-14.7 28-31.4 41.3-49.9 22.6-2.4 44-6.1 63.6-11 2.3 10.1 4.1 19.8 5.2 29.1zm38.5-66.7c-8.6 3.7-18 7-27.7 10.1-5.7-19.6-13.2-40-22.5-60.9 9.2-20.8 16.6-41.1 22.2-60.6 9.9 3.1 19.3 6.5 28.1 10.2 35.4 15.1 58.3 34.9 58.3 50.6-.1 15.7-23 35.6-58.4 50.6zM320.8 78.4z"/>
4 + <circle cx="420.9" cy="296.5" r="45.7"/>
5 + <path d="M520.5 78.1z"/>
6 + </g>
7 +</svg>
1 +/// <reference types="react-scripts" />
1 +// This optional code is used to register a service worker.
2 +// register() is not called by default.
3 +
4 +// This lets the app load faster on subsequent visits in production, and gives
5 +// it offline capabilities. However, it also means that developers (and users)
6 +// will only see deployed updates on subsequent visits to a page, after all the
7 +// existing tabs open on the page have been closed, since previously cached
8 +// resources are updated in the background.
9 +
10 +// To learn more about the benefits of this model and instructions on how to
11 +// opt-in, read https://bit.ly/CRA-PWA
12 +
13 +const isLocalhost = Boolean(
14 + window.location.hostname === 'localhost' ||
15 + // [::1] is the IPv6 localhost address.
16 + window.location.hostname === '[::1]' ||
17 + // 127.0.0.0/8 are considered localhost for IPv4.
18 + window.location.hostname.match(
19 + /^127(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}$/
20 + )
21 +);
22 +
23 +type Config = {
24 + onSuccess?: (registration: ServiceWorkerRegistration) => void;
25 + onUpdate?: (registration: ServiceWorkerRegistration) => void;
26 +};
27 +
28 +export function register(config?: Config) {
29 + if (process.env.NODE_ENV === 'production' && 'serviceWorker' in navigator) {
30 + // The URL constructor is available in all browsers that support SW.
31 + const publicUrl = new URL(
32 + process.env.PUBLIC_URL,
33 + window.location.href
34 + );
35 + if (publicUrl.origin !== window.location.origin) {
36 + // Our service worker won't work if PUBLIC_URL is on a different origin
37 + // from what our page is served on. This might happen if a CDN is used to
38 + // serve assets; see https://github.com/facebook/create-react-app/issues/2374
39 + return;
40 + }
41 +
42 + window.addEventListener('load', () => {
43 + const swUrl = `${process.env.PUBLIC_URL}/service-worker.js`;
44 +
45 + if (isLocalhost) {
46 + // This is running on localhost. Let's check if a service worker still exists or not.
47 + checkValidServiceWorker(swUrl, config);
48 +
49 + // Add some additional logging to localhost, pointing developers to the
50 + // service worker/PWA documentation.
51 + navigator.serviceWorker.ready.then(() => {
52 + console.log(
53 + 'This web app is being served cache-first by a service ' +
54 + 'worker. To learn more, visit https://bit.ly/CRA-PWA'
55 + );
56 + });
57 + } else {
58 + // Is not localhost. Just register service worker
59 + registerValidSW(swUrl, config);
60 + }
61 + });
62 + }
63 +}
64 +
65 +function registerValidSW(swUrl: string, config?: Config) {
66 + navigator.serviceWorker
67 + .register(swUrl)
68 + .then(registration => {
69 + registration.onupdatefound = () => {
70 + const installingWorker = registration.installing;
71 + if (installingWorker == null) {
72 + return;
73 + }
74 + installingWorker.onstatechange = () => {
75 + if (installingWorker.state === 'installed') {
76 + if (navigator.serviceWorker.controller) {
77 + // At this point, the updated precached content has been fetched,
78 + // but the previous service worker will still serve the older
79 + // content until all client tabs are closed.
80 + console.log(
81 + 'New content is available and will be used when all ' +
82 + 'tabs for this page are closed. See https://bit.ly/CRA-PWA.'
83 + );
84 +
85 + // Execute callback
86 + if (config && config.onUpdate) {
87 + config.onUpdate(registration);
88 + }
89 + } else {
90 + // At this point, everything has been precached.
91 + // It's the perfect time to display a
92 + // "Content is cached for offline use." message.
93 + console.log('Content is cached for offline use.');
94 +
95 + // Execute callback
96 + if (config && config.onSuccess) {
97 + config.onSuccess(registration);
98 + }
99 + }
100 + }
101 + };
102 + };
103 + })
104 + .catch(error => {
105 + console.error('Error during service worker registration:', error);
106 + });
107 +}
108 +
109 +function checkValidServiceWorker(swUrl: string, config?: Config) {
110 + // Check if the service worker can be found. If it can't reload the page.
111 + fetch(swUrl, {
112 + headers: { 'Service-Worker': 'script' }
113 + })
114 + .then(response => {
115 + // Ensure service worker exists, and that we really are getting a JS file.
116 + const contentType = response.headers.get('content-type');
117 + if (
118 + response.status === 404 ||
119 + (contentType != null && contentType.indexOf('javascript') === -1)
120 + ) {
121 + // No service worker found. Probably a different app. Reload the page.
122 + navigator.serviceWorker.ready.then(registration => {
123 + registration.unregister().then(() => {
124 + window.location.reload();
125 + });
126 + });
127 + } else {
128 + // Service worker found. Proceed as normal.
129 + registerValidSW(swUrl, config);
130 + }
131 + })
132 + .catch(() => {
133 + console.log(
134 + 'No internet connection found. App is running in offline mode.'
135 + );
136 + });
137 +}
138 +
139 +export function unregister() {
140 + if ('serviceWorker' in navigator) {
141 + navigator.serviceWorker.ready
142 + .then(registration => {
143 + registration.unregister();
144 + })
145 + .catch(error => {
146 + console.error(error.message);
147 + });
148 + }
149 +}
1 +// jest-dom adds custom jest matchers for asserting on DOM nodes.
2 +// allows you to do things like:
3 +// expect(element).toHaveTextContent(/react/i)
4 +// learn more: https://github.com/testing-library/jest-dom
5 +import '@testing-library/jest-dom/extend-expect';
1 +{
2 + "compilerOptions": {
3 + "target": "es5",
4 + "lib": [
5 + "dom",
6 + "dom.iterable",
7 + "esnext"
8 + ],
9 + "allowJs": true,
10 + "skipLibCheck": true,
11 + "esModuleInterop": true,
12 + "allowSyntheticDefaultImports": true,
13 + "strict": true,
14 + "forceConsistentCasingInFileNames": true,
15 + "module": "esnext",
16 + "moduleResolution": "node",
17 + "resolveJsonModule": true,
18 + "isolatedModules": true,
19 + "noEmit": true,
20 + "jsx": "react"
21 + },
22 + "include": [
23 + "src"
24 + ]
25 +}
1 +HELP.md
2 +.gradle
3 +build/
4 +!gradle/wrapper/gradle-wrapper.jar
5 +!**/src/main/**
6 +!**/src/test/**
7 +
8 +### STS ###
9 +.apt_generated
10 +.classpath
11 +.factorypath
12 +.project
13 +.settings
14 +.springBeans
15 +.sts4-cache
16 +
17 +### IntelliJ IDEA ###
18 +.idea
19 +*.iws
20 +*.iml
21 +*.ipr
22 +out/
23 +
24 +### NetBeans ###
25 +/nbproject/private/
26 +/nbbuild/
27 +/dist/
28 +/nbdist/
29 +/.nb-gradle/
30 +
31 +### VS Code ###
32 +.vscode/
1 +plugins {
2 + id 'org.springframework.boot' version '2.2.6.RELEASE'
3 + id 'io.spring.dependency-management' version '1.0.9.RELEASE'
4 + id 'java'
5 +}
6 +
7 +group = 'com.capstone'
8 +version = '0.0.1-SNAPSHOT'
9 +sourceCompatibility = '11'
10 +
11 +configurations {
12 + compileOnly {
13 + extendsFrom annotationProcessor
14 + }
15 +}
16 +
17 +repositories {
18 + mavenCentral()
19 +}
20 +
21 +dependencies {
22 + implementation platform("com.google.cloud:libraries-bom:4.0.0")
23 + implementation 'com.google.cloud:google-cloud-storage'
24 + implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
25 + implementation 'org.springframework.boot:spring-boot-starter-webflux'
26 + compileOnly 'org.projectlombok:lombok'
27 + runtimeOnly 'com.h2database:h2'
28 + annotationProcessor 'org.projectlombok:lombok'
29 + testImplementation('org.springframework.boot:spring-boot-starter-test') {
30 + exclude group: 'org.junit.vintage', module: 'junit-vintage-engine'
31 + }
32 +}
33 +
34 +test {
35 + useJUnitPlatform()
36 +}
1 +distributionBase=GRADLE_USER_HOME
2 +distributionPath=wrapper/dists
3 +distributionUrl=https\://services.gradle.org/distributions/gradle-6.3-bin.zip
4 +zipStoreBase=GRADLE_USER_HOME
5 +zipStorePath=wrapper/dists
1 +#!/usr/bin/env sh
2 +
3 +#
4 +# Copyright 2015 the original author or authors.
5 +#
6 +# Licensed under the Apache License, Version 2.0 (the "License");
7 +# you may not use this file except in compliance with the License.
8 +# You may obtain a copy of the License at
9 +#
10 +# https://www.apache.org/licenses/LICENSE-2.0
11 +#
12 +# Unless required by applicable law or agreed to in writing, software
13 +# distributed under the License is distributed on an "AS IS" BASIS,
14 +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 +# See the License for the specific language governing permissions and
16 +# limitations under the License.
17 +#
18 +
19 +##############################################################################
20 +##
21 +## Gradle start up script for UN*X
22 +##
23 +##############################################################################
24 +
25 +# Attempt to set APP_HOME
26 +# Resolve links: $0 may be a link
27 +PRG="$0"
28 +# Need this for relative symlinks.
29 +while [ -h "$PRG" ]; do
30 + ls=$(ls -ld "$PRG")
31 + link=$(expr "$ls" : '.*-> \(.*\)$')
32 + if expr "$link" : '/.*' >/dev/null; then
33 + PRG="$link"
34 + else
35 + PRG=$(dirname "$PRG")"/$link"
36 + fi
37 +done
38 +SAVED="$(pwd)"
39 +cd "$(dirname "$PRG")/" >/dev/null
40 +APP_HOME="$(pwd -P)"
41 +cd "$SAVED" >/dev/null
42 +
43 +APP_NAME="Gradle"
44 +APP_BASE_NAME=$(basename "$0")
45 +
46 +# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
47 +DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"'
48 +
49 +# Use the maximum available, or set MAX_FD != -1 to use that value.
50 +MAX_FD="maximum"
51 +
52 +warn() {
53 + echo "$*"
54 +}
55 +
56 +die() {
57 + echo
58 + echo "$*"
59 + echo
60 + exit 1
61 +}
62 +
63 +# OS specific support (must be 'true' or 'false').
64 +cygwin=false
65 +msys=false
66 +darwin=false
67 +nonstop=false
68 +case "$(uname)" in
69 +CYGWIN*)
70 + cygwin=true
71 + ;;
72 +Darwin*)
73 + darwin=true
74 + ;;
75 +MINGW*)
76 + msys=true
77 + ;;
78 +NONSTOP*)
79 + nonstop=true
80 + ;;
81 +esac
82 +
83 +CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
84 +
85 +# Determine the Java command to use to start the JVM.
86 +if [ -n "$JAVA_HOME" ]; then
87 + if [ -x "$JAVA_HOME/jre/sh/java" ]; then
88 + # IBM's JDK on AIX uses strange locations for the executables
89 + JAVACMD="$JAVA_HOME/jre/sh/java"
90 + else
91 + JAVACMD="$JAVA_HOME/bin/java"
92 + fi
93 + if [ ! -x "$JAVACMD" ]; then
94 + die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
95 +
96 +Please set the JAVA_HOME variable in your environment to match the
97 +location of your Java installation."
98 + fi
99 +else
100 + JAVACMD="java"
101 + which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
102 +
103 +Please set the JAVA_HOME variable in your environment to match the
104 +location of your Java installation."
105 +fi
106 +
107 +# Increase the maximum file descriptors if we can.
108 +if [ "$cygwin" = "false" -a "$darwin" = "false" -a "$nonstop" = "false" ]; then
109 + MAX_FD_LIMIT=$(ulimit -H -n)
110 + if [ $? -eq 0 ]; then
111 + if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ]; then
112 + MAX_FD="$MAX_FD_LIMIT"
113 + fi
114 + ulimit -n $MAX_FD
115 + if [ $? -ne 0 ]; then
116 + warn "Could not set maximum file descriptor limit: $MAX_FD"
117 + fi
118 + else
119 + warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
120 + fi
121 +fi
122 +
123 +# For Darwin, add options to specify how the application appears in the dock
124 +if $darwin; then
125 + GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
126 +fi
127 +
128 +# For Cygwin or MSYS, switch paths to Windows format before running java
129 +if [ "$cygwin" = "true" -o "$msys" = "true" ]; then
130 + APP_HOME=$(cygpath --path --mixed "$APP_HOME")
131 + CLASSPATH=$(cygpath --path --mixed "$CLASSPATH")
132 + JAVACMD=$(cygpath --unix "$JAVACMD")
133 +
134 + # We build the pattern for arguments to be converted via cygpath
135 + ROOTDIRSRAW=$(find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null)
136 + SEP=""
137 + for dir in $ROOTDIRSRAW; do
138 + ROOTDIRS="$ROOTDIRS$SEP$dir"
139 + SEP="|"
140 + done
141 + OURCYGPATTERN="(^($ROOTDIRS))"
142 + # Add a user-defined pattern to the cygpath arguments
143 + if [ "$GRADLE_CYGPATTERN" != "" ]; then
144 + OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
145 + fi
146 + # Now convert the arguments - kludge to limit ourselves to /bin/sh
147 + i=0
148 + for arg in "$@"; do
149 + CHECK=$(echo "$arg" | egrep -c "$OURCYGPATTERN" -)
150 + CHECK2=$(echo "$arg" | egrep -c "^-") ### Determine if an option
151 +
152 + if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ]; then ### Added a condition
153 + eval $(echo args$i)=$(cygpath --path --ignore --mixed "$arg")
154 + else
155 + eval $(echo args$i)="\"$arg\""
156 + fi
157 + i=$(expr $i + 1)
158 + done
159 + case $i in
160 + 0) set -- ;;
161 + 1) set -- "$args0" ;;
162 + 2) set -- "$args0" "$args1" ;;
163 + 3) set -- "$args0" "$args1" "$args2" ;;
164 + 4) set -- "$args0" "$args1" "$args2" "$args3" ;;
165 + 5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
166 + 6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
167 + 7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
168 + 8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
169 + 9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
170 + esac
171 +fi
172 +
173 +# Escape application args
174 +save() {
175 + for i; do printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/"; done
176 + echo " "
177 +}
178 +APP_ARGS=$(save "$@")
179 +
180 +# Collect all arguments for the java command, following the shell quoting and substitution rules
181 +eval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS "\"-Dorg.gradle.appname=$APP_BASE_NAME\"" -classpath "\"$CLASSPATH\"" org.gradle.wrapper.GradleWrapperMain "$APP_ARGS"
182 +
183 +exec "$JAVACMD" "$@"
1 +@rem
2 +@rem Copyright 2015 the original author or authors.
3 +@rem
4 +@rem Licensed under the Apache License, Version 2.0 (the "License");
5 +@rem you may not use this file except in compliance with the License.
6 +@rem You may obtain a copy of the License at
7 +@rem
8 +@rem https://www.apache.org/licenses/LICENSE-2.0
9 +@rem
10 +@rem Unless required by applicable law or agreed to in writing, software
11 +@rem distributed under the License is distributed on an "AS IS" BASIS,
12 +@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 +@rem See the License for the specific language governing permissions and
14 +@rem limitations under the License.
15 +@rem
16 +
17 +@if "%DEBUG%" == "" @echo off
18 +@rem ##########################################################################
19 +@rem
20 +@rem Gradle startup script for Windows
21 +@rem
22 +@rem ##########################################################################
23 +
24 +@rem Set local scope for the variables with windows NT shell
25 +if "%OS%"=="Windows_NT" setlocal
26 +
27 +set DIRNAME=%~dp0
28 +if "%DIRNAME%" == "" set DIRNAME=.
29 +set APP_BASE_NAME=%~n0
30 +set APP_HOME=%DIRNAME%
31 +
32 +@rem Resolve any "." and ".." in APP_HOME to make it shorter.
33 +for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
34 +
35 +@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
36 +set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"
37 +
38 +@rem Find java.exe
39 +if defined JAVA_HOME goto findJavaFromJavaHome
40 +
41 +set JAVA_EXE=java.exe
42 +%JAVA_EXE% -version >NUL 2>&1
43 +if "%ERRORLEVEL%" == "0" goto init
44 +
45 +echo.
46 +echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
47 +echo.
48 +echo Please set the JAVA_HOME variable in your environment to match the
49 +echo location of your Java installation.
50 +
51 +goto fail
52 +
53 +:findJavaFromJavaHome
54 +set JAVA_HOME=%JAVA_HOME:"=%
55 +set JAVA_EXE=%JAVA_HOME%/bin/java.exe
56 +
57 +if exist "%JAVA_EXE%" goto init
58 +
59 +echo.
60 +echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
61 +echo.
62 +echo Please set the JAVA_HOME variable in your environment to match the
63 +echo location of your Java installation.
64 +
65 +goto fail
66 +
67 +:init
68 +@rem Get command-line arguments, handling Windows variants
69 +
70 +if not "%OS%" == "Windows_NT" goto win9xME_args
71 +
72 +:win9xME_args
73 +@rem Slurp the command line arguments.
74 +set CMD_LINE_ARGS=
75 +set _SKIP=2
76 +
77 +:win9xME_args_slurp
78 +if "x%~1" == "x" goto execute
79 +
80 +set CMD_LINE_ARGS=%*
81 +
82 +:execute
83 +@rem Setup the command line
84 +
85 +set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
86 +
87 +@rem Execute Gradle
88 +"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS%
89 +
90 +:end
91 +@rem End local scope for the variables with windows NT shell
92 +if "%ERRORLEVEL%"=="0" goto mainEnd
93 +
94 +:fail
95 +rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
96 +rem the _cmd.exe /c_ return code!
97 +if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
98 +exit /b 1
99 +
100 +:mainEnd
101 +if "%OS%"=="Windows_NT" endlocal
102 +
103 +:omega
1 +package com.capstone.web;
2 +
3 +import org.springframework.boot.SpringApplication;
4 +import org.springframework.boot.autoconfigure.SpringBootApplication;
5 +
6 +@SpringBootApplication
7 +public class WebApplication {
8 +
9 + public static void main(String[] args) {
10 + SpringApplication.run(WebApplication.class, args);
11 + }
12 +
13 +}
1 +package com.capstone.web.config;
2 +
3 +import com.google.cloud.storage.Storage;
4 +import com.google.cloud.storage.StorageOptions;
5 +import org.springframework.context.annotation.Bean;
6 +import org.springframework.context.annotation.Configuration;
7 +
8 +@Configuration
9 +public class GCStorage {
10 + @Bean
11 + public Storage storage() {
12 + return StorageOptions.getDefaultInstance().getService();
13 + }
14 +}
1 +package com.capstone.web.controller;
2 +
3 +import com.capstone.web.dto.EmotionResponseDto;
4 +import com.capstone.web.dto.ScriptResponseDto;
5 +import com.capstone.web.dto.VideoResponseDto;
6 +import com.capstone.web.service.GCSReaderService;
7 +import org.springframework.web.bind.annotation.GetMapping;
8 +import org.springframework.web.bind.annotation.PostMapping;
9 +import org.springframework.web.bind.annotation.RequestParam;
10 +import org.springframework.web.bind.annotation.RestController;
11 +import reactor.core.publisher.Mono;
12 +
13 +import java.util.List;
14 +
15 +@RestController
16 +public class GCSController {
17 + private final GCSReaderService gcsReaderService;
18 +
19 + public GCSController(GCSReaderService gcsReaderService) {
20 + this.gcsReaderService = gcsReaderService;
21 + }
22 +
23 +
24 + @GetMapping("/get-all-videos")
25 + public List<VideoResponseDto> getAllVideos(@RequestParam(name = "storageName") String name) {
26 + return gcsReaderService.getAllVideos(name);
27 + }
28 +
29 + // Download from GCS => split the video and generate one script entry per segment
30 + @GetMapping("/get-script-result")
31 + public Mono<ScriptResponseDto> getScriptResult(@RequestParam(name = "videoName") String name) {
32 + return gcsReaderService.getScriptResult(name);
33 + }
34 +
35 + @GetMapping("/get-emotion")
36 + public Mono<EmotionResponseDto> getEmotionResult(@RequestParam(name = "videoName") String name) {
37 + return gcsReaderService.getEmotionResult(name);
38 + }
39 +
40 + @GetMapping("/get-chat-result")
41 + public Mono<String> getChatResult(@RequestParam(name = "videoName") String name) {
42 + return gcsReaderService.getChatResult(name);
43 + }
44 +
45 + @GetMapping("/get-decibel-result")
46 + public Mono<String> getDecibelResult(@RequestParam(name = "videoName") String name) {
47 + return gcsReaderService.getDecibelResult(name);
48 + }
49 +}
50 +
51 +
52 +//name: test123.wav => the name comes back in directory/file form
53 +//Time Created: likely needed later when generating the script
54 +//Content-Type: video, audio, text/plain
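Each endpoint above takes a single query parameter (`storageName` or `videoName`). A small TypeScript helper the React frontend might use to build these request URLs (the `API_BASE` value is an assumption for local development; the paths and parameter names come from the `@GetMapping`/`@RequestParam` annotations above):

```typescript
// Base URL of the Spring Boot backend; assumed value for local development.
const API_BASE = "http://localhost:8080";

// Builds a request URL for one of the GCSController endpoints,
// e.g. endpointUrl("/get-all-videos", "storageName", "my-bucket").
function endpointUrl(path: string, param: string, value: string): string {
  return `${API_BASE}${path}?${param}=${encodeURIComponent(value)}`;
}

// Usage (not executed here):
// fetch(endpointUrl("/get-script-result", "videoName", "test123.wav"))
//   .then(res => res.json());
```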
1 +package com.capstone.web.dto;
2 +
3 +import lombok.Builder;
4 +import lombok.Getter;
5 +
6 +@Getter
7 +public class EmotionItem {
8 + Integer start;
9 + Integer end;
10 +
11 + @Builder
12 + public EmotionItem(Integer start, Integer end){
13 + this.start = start;
14 + this.end = end;
15 + }
16 +}
1 +package com.capstone.web.dto;
2 +
3 +import lombok.AllArgsConstructor;
4 +import lombok.Getter;
5 +
6 +import java.util.ArrayList;
7 +import java.util.List;
8 +
9 +@Getter
10 +@AllArgsConstructor
11 +public class EmotionResponseDto {
12 + List<EmotionItem> emotionEditList = new ArrayList<>();
13 +}
1 +package com.capstone.web.dto;
2 +
3 +import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
4 +import lombok.Builder;
5 +import lombok.Getter;
6 +import lombok.NoArgsConstructor;
7 +
8 +import java.util.ArrayList;
9 +import java.util.List;
10 +
11 +@Getter @NoArgsConstructor // no-args ctor lets Jackson deserialize the response
12 +@JsonIgnoreProperties(ignoreUnknown = true)
13 +public class ScriptResponseDto {
14 + private String fullScript;
15 + private List<TopicEditItem> topicEditList = new ArrayList<>();
16 +
17 + @Builder
18 + public ScriptResponseDto(String fullScript, List<TopicEditItem> topicEditList){
19 + this.fullScript = fullScript;
20 + this.topicEditList = topicEditList;
21 + }
22 +}
1 +package com.capstone.web.dto;
2 +
3 +import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
4 +import lombok.Builder;
5 +import lombok.Getter;
6 +import lombok.NoArgsConstructor;
7 +
8 +// no-args ctor lets Jackson deserialize the response
9 +@Getter @NoArgsConstructor
10 +@JsonIgnoreProperties(ignoreUnknown = true)
11 +public class TopicEditItem {
12 + Integer start;
13 + Integer end;
14 + String topic;
15 +
16 + @Builder
17 + public TopicEditItem(Integer start, Integer end, String topic){
18 + this.start = start;
19 + this.end = end;
20 + this.topic = topic;
21 + }
22 +}
1 +package com.capstone.web.dto;
2 +
3 +import lombok.Builder;
4 +import lombok.Getter;
5 +import lombok.NoArgsConstructor;
6 +
7 +import java.util.Date;
8 +
9 +@Getter
10 +@NoArgsConstructor
11 +public class VideoResponseDto {
12 + private String name;
13 + //mp4, mov ....
14 + private String extension;
15 + private Date createdTime;
16 +
17 + @Builder
18 + public VideoResponseDto(String name, String extension, Long createdTime) {
19 + this.name = name;
20 + this.extension = extension;
21 + this.createdTime = new Date(createdTime);
22 + }
23 +}
1 +package com.capstone.web.service;
2 +
3 +import com.capstone.web.dto.EmotionResponseDto;
4 +import com.capstone.web.dto.ScriptResponseDto;
5 +import com.capstone.web.dto.VideoResponseDto;
6 +import com.google.api.client.util.Lists;
7 +import com.google.cloud.storage.Blob;
8 +import com.google.cloud.storage.Bucket;
9 +import com.google.cloud.storage.Storage;
10 +import org.springframework.beans.factory.annotation.Autowired;
11 +import org.springframework.stereotype.Service;
12 +import org.springframework.web.reactive.function.client.WebClient;
13 +import reactor.core.publisher.Mono;
14 +
15 +import java.util.Date;
16 +import java.util.List;
17 +import java.util.Map;
18 +import java.util.stream.Collectors;
19 +
20 +@Service
21 +public class GCSReaderService {
22 +
23 + // injected via the constructor below; @Autowired field injection cannot set a final field
24 + private final Storage storage;
25 +
26 + @Autowired
27 + WebClient.Builder builder;
28 +
29 + public GCSReaderService(Storage storage) {
30 + this.storage = storage;
31 + }
32 +
33 + public List<VideoResponseDto> getAllVideos(String userName) {
34 + Bucket bucket = storage.get(userName, Storage.BucketGetOption.fields(Storage.BucketField.values()));
35 + return Lists.newArrayList(bucket.list().iterateAll())
36 + .stream()
37 + .filter(this::isVideo)
38 + .map(this::blobToVideoResponseDto)
39 + .collect(Collectors.toList());
40 + }
41 +
42 + public Mono<ScriptResponseDto> getScriptResult(String name) {
43 + WebClient webClient = builder.baseUrl("http://localhost:5000").build();
44 + return webClient.get()
45 + .uri(uriBuilder -> uriBuilder.path("/script-api")
46 + .queryParam("fileName", name)
47 + .build())
48 + .retrieve()
49 + .bodyToMono(ScriptResponseDto.class);
50 + }
51 +
52 + public Mono<EmotionResponseDto> getEmotionResult(String name) {
53 + WebClient webClient = builder.baseUrl("http://localhost:5000").build();
54 + return webClient.get()
55 + .uri(uriBuilder -> uriBuilder.path("/emotion-api")
56 + .queryParam("fileName", name)
57 + .build())
58 + .retrieve()
59 + .bodyToMono(EmotionResponseDto.class);
60 + }
61 +
62 + public Mono<String> getChatResult(String name) {
63 + WebClient webClient = builder.baseUrl("http://chathost:5000").build();
64 + return webClient.get()
65 + .uri(uriBuilder -> uriBuilder.path("/chat-api")
66 + .queryParam("fileName", name)
67 + .build())
68 + .retrieve()
69 + .bodyToMono(String.class);
70 + }
71 +
72 + public Mono<String> getDecibelResult(String name) {
73 + WebClient webClient = builder.baseUrl("http://localhost:5000").build();
74 + return webClient.get()
75 + .uri(uriBuilder -> uriBuilder.path("/decibel-api")
76 + .queryParam("fileName", name)
77 + .build())
78 + .retrieve()
79 + .bodyToMono(String.class);
80 + }
81 +
82 + private VideoResponseDto blobToVideoResponseDto(Blob blob) {
83 + return VideoResponseDto.builder()
84 + .name(getVideoName(blob.getName()))
85 + .createdTime(blob.getCreateTime())
86 + .extension(getVideoExtension(blob.getContentType()))
87 + .build();
88 + }
89 +
90 + private String getVideoName(String name) {
91 + return name.split("/")[0];
92 + }
93 +
94 + private boolean isVideo(Blob blob) {
95 + return blob.getContentType().contains("video");
96 + }
97 +
98 + private String getVideoExtension(String contentType) {
99 + return contentType.split("/")[1];
100 + }
101 +
102 + private void printBlobAllMetaData(Blob blob) {
103 + // Print blob metadata
104 + System.out.println("======================================\n");
105 + System.out.println("Bucket: " + blob.getBucket());
106 + System.out.println("CacheControl: " + blob.getCacheControl());
107 + System.out.println("ComponentCount: " + blob.getComponentCount());
108 + System.out.println("ContentDisposition: " + blob.getContentDisposition());
109 + System.out.println("ContentEncoding: " + blob.getContentEncoding());
110 + System.out.println("ContentLanguage: " + blob.getContentLanguage());
111 + System.out.println("ContentType: " + blob.getContentType());
112 + System.out.println("Crc32c: " + blob.getCrc32c());
113 + System.out.println("Crc32cHexString: " + blob.getCrc32cToHexString());
114 + System.out.println("ETag: " + blob.getEtag());
115 + System.out.println("Generation: " + blob.getGeneration());
116 + System.out.println("Id: " + blob.getBlobId());
117 + System.out.println("KmsKeyName: " + blob.getKmsKeyName());
118 + System.out.println("Md5Hash: " + blob.getMd5());
119 + System.out.println("Md5HexString: " + blob.getMd5ToHexString());
120 + System.out.println("MediaLink: " + blob.getMediaLink());
121 + System.out.println("Metageneration: " + blob.getMetageneration());
122 + System.out.println("Name: " + blob.getName());
123 + System.out.println("Size: " + blob.getSize());
124 + System.out.println("StorageClass: " + blob.getStorageClass());
125 + System.out.println("TimeCreated: " + new Date(blob.getCreateTime()));
126 + System.out.println("Last Metadata Update: " + new Date(blob.getUpdateTime()));
127 + Boolean temporaryHoldIsEnabled = (blob.getTemporaryHold() != null && blob.getTemporaryHold());
128 + System.out.println("temporaryHold: " + (temporaryHoldIsEnabled ? "enabled" : "disabled"));
129 + Boolean eventBasedHoldIsEnabled =
130 + (blob.getEventBasedHold() != null && blob.getEventBasedHold());
131 + System.out.println("eventBasedHold: " + (eventBasedHoldIsEnabled ? "enabled" : "disabled"));
132 + if (blob.getRetentionExpirationTime() != null) {
133 + System.out.println("retentionExpirationTime: " + new Date(blob.getRetentionExpirationTime()));
134 + }
135 + if (blob.getMetadata() != null) {
136 + System.out.println("\n\n\nUser metadata:");
137 + for (Map.Entry<String, String> userMetadata : blob.getMetadata().entrySet()) {
138 + System.out.println(userMetadata.getKey() + "=" + userMetadata.getValue());
139 + }
140 + }
141 + }
142 +}
1 +package com.capstone.web;
2 +
3 +import org.junit.jupiter.api.Test;
4 +import org.springframework.boot.test.context.SpringBootTest;
5 +
6 +@SpringBootTest
7 +class WebApplicationTests {
8 +
9 + @Test
10 + void contextLoads() {
11 + }
12 +
13 +}
1 +capstone-sptt.json
2 +__pycache__/
3 +*.csv
4 +*.png
5 +.ipynb_checkpoints/
1 +FROM ubuntu:16.04
2 +WORKDIR /root
3 +EXPOSE 5000
4 +
5 +ENV PROJ_NAME=static-protocol-264107
6 +ENV LC_ALL=C.UTF-8
7 +ENV LANG=C.UTF-8
8 +
9 +COPY ./*.py /root/
10 +
11 +RUN apt-get -y update && apt-get -y install python3 python3-pip curl
12 +RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
13 +
14 +COPY ./capstone-sptt.json /root/credential_key.json
15 +RUN gcloud auth activate-service-account --key-file=credential_key.json && gcloud config set project $PROJ_NAME
16 +
17 +RUN pip3 install --upgrade pip && pip3 install --upgrade google-cloud-storage && pip3 install --upgrade google-cloud-speech && pip3 install flask flask_cors
18 +RUN pip3 install pandas matplotlib
19 +
20 +RUN gcloud auth activate-service-account --key-file credential_key.json
21 +ENV GOOGLE_APPLICATION_CREDENTIALS="/root/credential_key.json"
22 +
23 +ENTRYPOINT [ "flask", "run" , "--host", "0.0.0.0"]
...\ No newline at end of file ...\ No newline at end of file
1 +from flask import Flask, request, jsonify
2 +from flask_cors import CORS, cross_origin
3 +from chatDownloader import *
4 +app = Flask(__name__)
5 +cors = CORS(app)
6 +app.config['CORS_HEADERS'] = 'Content-Type'
7 +
8 +@app.route('/chat-api')
9 +@cross_origin()
10 +def chat_analysis():
11 + bucket_name = "capstone-sptt-storage"
12 + file_name = request.args.get("fileName")
13 + destination_file_name = "chat.csv"
14 + download_file(bucket_name, file_name, destination_file_name)
15 + return jsonify(analysis(bucket_name, file_name))
16 +
17 +if __name__ == "__main__":
18 + app.run()
...\ No newline at end of file ...\ No newline at end of file
1 +'''
2 +Selection method
3 +1. Remove noise with a median filter - OK
4 +2. Find local minima (split into 30-minute intervals)
5 +3. Find values above a threshold (the smallest local minimum is chosen as the threshold)
6 +4. Interval? Default: 1 minute before and after
7 +5. If a qualifying peak exists within the 1-minute window, merge the intervals
8 +'''
9 +import math
10 +import requests
11 +import json
12 +import sys
13 +import time
14 +import csv
15 +import pandas as pd
16 +import numpy as np
17 +from importlib import reload
18 +from google.cloud import storage
19 +from collections import OrderedDict
20 +
21 +def convert_to_sec(time) :
22 + splited_time = time.split(':')
23 + hours = int(splited_time[0])
24 + minutes = int(splited_time[1])
25 + seconds = int(splited_time[2])
26 + return (hours * 3600) + (minutes * 60) + seconds
27 +
28 +def convert_to_interval(idx) :
29 + end = idx * 120
30 + start = end - 120
31 + return str(start) + " - " + str(end)
32 +
33 +def convert_to_start(time) :
34 + strip_str = time.strip()
35 + start = strip_str.split('-')[0]
36 + return int(start)
37 +
38 +def convert_to_end(time) :
39 + strip_str = time.strip()
40 + end = strip_str.split('-')[1]
41 + return int(end)
42 +
43 +def median_filter(data,filter_size) :
44 + for x in range(len(data)) :
45 + median_list = []
46 + for index in range(x-filter_size, x+filter_size+1) :
47 + if (index >= 0 and index < len(data)) :
48 + median_list.append(data[index])
49 + data[x] = get_median_value(median_list)
50 + return data
51 +
52 +def get_median_value(median_list) :
53 + median_idx = len(median_list)//2
54 + median_list.sort()
55 + return median_list[median_idx]
56 +
57 +def get_frequency_graph_url(timeCountSeries, file_name, bucket_name) :
58 + ax = timeCountSeries.plot(title='chat numbers', figsize=(20, 5))
59 + fig = ax.get_figure()
60 + fig.savefig(str(file_name)+'.png')
61 + return upload_to_GCS(bucket_name, file_name)
62 +
63 +def get_local_maximum_df(time_count_df):
64 + max_time = time_count_df['time'].max()
65 + bins = np.arange(0,max_time,900)
66 + ind = np.digitize(time_count_df["time"], bins)
67 + time_count_df["location"] = ind
68 + location_groups = time_count_df.groupby('location')
69 + local_maximum_df = pd.DataFrame(columns = ['time','chat_count', 'location'])
70 + for location, location_group in location_groups:
71 + local_maximum = location_group.sort_values(by='chat_count').tail(1)
72 + local_maximum_df = local_maximum_df.append(local_maximum)
73 + return local_maximum_df
74 +
75 +def get_increase_df(time_count_df) :
76 +
77 + increase_threshold = math.ceil(time_count_df['chat_count'].mean())-1
78 + cond = ( time_count_df["chat_count"] - time_count_df["chat_count"].shift(-1) ) > increase_threshold
79 + increase_df = time_count_df[cond]
80 + print(increase_df)
81 + return increase_df
82 +
83 +def get_interval_list(peak_df, local_maximum_df, time_count_df):
84 + peak_time_list = peak_df['time'].to_list()
85 + result_json = []
86 + for time in peak_time_list :
87 + start = time-60
88 + end = time+60
89 + local_maximum_list = local_maximum_df.query('time<=@time')['chat_count'].tail(1).to_list()
90 +
91 + # if (len(local_maximum_list) > 0) :
92 + # local_maximum = local_maximum_list[0]
93 +
94 + # end_result_df = time_count_df.query('time>@end & time< @end+60')
95 + # end_result = end_result_df.query('chat_count>=@local_maximum')
96 +
97 + # if (len(end_result['time'].to_list()) == 0) :
98 + # print("Origin End : ", end)
99 + # else :
100 + # end = end_result['time'].to_list()[0]
101 + # peak_time_list.append(end+60)
102 + # print("Changed End : ", end)
103 + chat_interval = OrderedDict()
104 + chat_interval['start'] = start
105 + chat_interval['end'] = end
106 + result_json.append(chat_interval)
107 + return result_json
108 +
109 +def remove_duplicate_interval(result_json):
110 + response_json = []
111 + for idx, val in enumerate(result_json) :
112 + if (idx == len(result_json)-1) : continue
113 + start = val['start']
114 + end = val['end']
115 + next_start = result_json[idx+1]['start']
116 + next_end = result_json[idx+1]['end']
117 + chat_interval = OrderedDict()
118 +
119 + if (next_start <= end) :
120 + end = next_end
121 + chat_interval['start'] = start
122 + chat_interval['end'] = end
123 + result_json[idx+1] = chat_interval
124 + else:
125 + chat_interval['start'] = start
126 + chat_interval['end'] = end
127 + response_json.append(chat_interval)
128 +
129 + return response_json
130 +
131 +def analysis(bucket_name,file_name):
132 + chat_response = OrderedDict()
133 + ############### Chat Frequency Graph
134 + print("Start Analysis")
135 + df = pd.read_csv("chat.csv", names=['time', 'name', 'chat'])
136 + timeCountSeries = df.groupby('time').count()['chat']
137 + timeCountSeries = median_filter(timeCountSeries, 5)
138 + chat_response["chat_frequency_url"] = get_frequency_graph_url(timeCountSeries, file_name, bucket_name)
139 +
140 + time_count_df = timeCountSeries.to_frame().reset_index()
141 + time_count_df.columns=['time','chat_count']
142 + time_count_df['time'] = time_count_df['time'].apply(lambda x: convert_to_sec(x))
143 + time_count_df = time_count_df.query('time>300 & time < (time.max()-300)')
144 + ############### Local Maximum
145 + local_maximum_df = get_local_maximum_df(time_count_df)
146 +
147 + ############### Chat Edit Point
148 + increase_df = get_increase_df(time_count_df)
149 +
150 + '''Interval selection
151 + minimum: 1 minute before and after
152 + overlapping intervals are merged
153 + if a value equal to the local minimum falls within the 1-minute window, the interval should be extended further
154 + '''
155 + peak_df = increase_df.append(local_maximum_df)
156 + peak_df = peak_df.sort_values(by='time').drop_duplicates('time', keep='first')
157 + result_json = get_interval_list(peak_df, local_maximum_df, time_count_df)
158 + print ("result_json : " + str(result_json))
159 + response_json = remove_duplicate_interval(result_json)
160 + chat_response["chat_edit_list"] = response_json
161 +
162 + # convert_to_json(response_df)
163 + return chat_response
164 +
165 +
166 +def download_file(bucket_name, file_name, destination_file_name):
167 + print("Start Download File")
168 + storage_client = storage.Client()
169 + bucket = storage_client.bucket(bucket_name)
170 + source_blob_name= file_name+ "/source/" + file_name + ".csv"
171 + blob = bucket.blob(source_blob_name)
172 + blob.download_to_filename(destination_file_name)
173 + print("End Download File")
174 +
175 +def upload_to_GCS(bucket_name, file_name):
176 + storage_client = storage.Client()
177 + bucket = storage_client.bucket(bucket_name)
178 + png_blob_name = bucket.blob(file_name+ "/result/chat-frequency.png")
179 + png_blob_name.upload_from_filename( str(file_name) + ".png" )
180 + return file_name+ "/result/chat-frequency.png"
181 +
1 +credential_key.json
2 +__pycache__/
...\ No newline at end of file ...\ No newline at end of file
1 +## The input folder is volume-mounted; image extraction runs on the host
2 +FROM ubuntu:16.04
3 +WORKDIR /root
4 +
5 +ENV LC_ALL=C.UTF-8
6 +ENV LANG=C.UTF-8
7 +ENV PROJ_NAME=static-protocol-264107
8 +
9 +COPY ./*.py /root/
10 +COPY ./credential_key.json /root/credential_key.json
11 +
12 +RUN apt-get -y update && apt-get -y install python3 python3-pip curl
13 +RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
14 +
15 +RUN gcloud auth activate-service-account --key-file=credential_key.json && gcloud config set project $PROJ_NAME
16 +
17 +RUN pip3 install --upgrade pip && pip3 install --upgrade google-cloud-storage && pip3 install --upgrade google-cloud-speech && pip3 install flask flask_cors
18 +
19 +
20 +RUN gcloud auth activate-service-account --key-file credential_key.json
21 +ENV GOOGLE_APPLICATION_CREDENTIALS="/root/credential_key.json"
22 +
23 +COPY input /root/input
24 +ENTRYPOINT [ "flask", "run" , "--host", "0.0.0.0"]
1 +from flask import Flask, request, jsonify
2 +from flask_cors import CORS, cross_origin
3 +from emotion import *
4 +app = Flask(__name__)
5 +cors = CORS(app)
6 +app.config['CORS_HEADERS'] = 'Content-Type'
7 +
8 +@app.route('/emotion-api')
9 +@cross_origin()
10 +def chat_analysis():
11 + bucket_name = "capstone-sptt-storage"
12 + file_name = request.args.get("fileName")
13 +
14 + # download_file(bucket_name, file_name, destination_file_name)
15 + return jsonify(extract_edit_point(file_name))
16 +
17 +if __name__ == "__main__":
18 + app.run()
...\ No newline at end of file ...\ No newline at end of file
1 +from collections import OrderedDict
2 +def convert_to_sec(time_str) :
3 + hour, minute, sec = time_str.split(":")
4 + return int(sec) + (int(minute)*60) + (int(hour)*3600)
5 +
6 +def extract_edit_point(source_file_name) :
7 + f = open("input/" + source_file_name+".txt")
8 + inter_result = []
9 + start = -1
10 + lines = f.readlines()
11 + for line in lines :
12 + time, emotion, percentage = line.split(" ")
13 + if(emotion == "happy" and float(percentage.split("%")[0]) > 90) :
14 + inter_result.append(time)
15 + f.close()
16 + count = 0
17 + output = []
18 + for i, time in enumerate(inter_result) :
19 + timeValue = convert_to_sec(time)
20 + if (start == -1) :
21 + start = timeValue
22 + continue # no previous timestamp yet; avoid wrapping to inter_result[-1]
23 + previous = convert_to_sec(inter_result[i-1])
24 + if (timeValue - previous) > 20 :
24 + end = previous
25 + if count > 5 :
26 + output.append(str(start) + " " + str(end))
27 + start = timeValue
28 + count = 0
29 + else :
30 + count = count + 1
31 +
32 + result_json = []
33 + for point in output:
34 + start = int(point.split(" ")[0])
35 + end = int(point.split(" ")[1])
36 + emotion_interval = OrderedDict()
37 + emotion_interval['start'] = start
38 + emotion_interval['end'] = end
39 + result_json.append(emotion_interval)
40 + response = OrderedDict()
41 + response["emotion_edit_point"] = result_json
42 + return response
This diff could not be displayed because it is too large.
This diff could not be displayed because it is too large.
This diff could not be displayed because it is too large.
This diff could not be displayed because it is too large.
This diff could not be displayed because it is too large.
1 +{
2 + "python.pythonPath": "C:\\Users\\JongHyun\\AppData\\Local\\Programs\\Python\\Python36\\python.exe"
3 +}
...\ No newline at end of file ...\ No newline at end of file
1 +# emotion-recognition
2 +```shell
3 +pip install -r requirements.txt
4 +```
5 +
6 +Input folder: videos/sample_images (edit line 26 to change)
7 +
8 +Output file: videos/sample_images/emotions.txt
9 +
1 +Put the CSV file downloaded from the provided link in this directory.
...\ No newline at end of file ...\ No newline at end of file
This diff could not be displayed because it is too large.
1 +MIT License
2 +
3 +Copyright (c) [2018] [Omar Ayman]
4 +
5 +Permission is hereby granted, free of charge, to any person obtaining a copy
6 +of this software and associated documentation files (the "Software"), to deal
7 +in the Software without restriction, including without limitation the rights
8 +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 +copies of the Software, and to permit persons to whom the Software is
10 +furnished to do so, subject to the following conditions:
11 +
12 +The above copyright notice and this permission notice shall be included in all
13 +copies or substantial portions of the Software.
14 +
15 +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 +SOFTWARE.
This diff is collapsed. Click to expand it.
1 +from keras.preprocessing.image import img_to_array
2 +import imutils
3 +import cv2
4 +from keras.models import load_model
5 +import numpy as np
6 +import os
7 +import time
8 +def absoluteFilePaths(directory):
9 + for dirpath,_,filenames in os.walk(directory):
10 + for f in filenames:
11 + yield os.path.abspath(os.path.join(dirpath, f))
12 +
13 +# parameters for loading data and images
14 +detection_model_path = 'haarcascade_files/haarcascade_frontalface_default.xml'
15 +emotion_model_path = 'models/_mini_XCEPTION.102-0.66.hdf5'
16 +
17 +# hyper-parameters for bounding boxes shape
18 +# loading models
19 +face_detection = cv2.CascadeClassifier(detection_model_path)
20 +emotion_classifier = load_model(emotion_model_path, compile=False)
21 +EMOTIONS = ["angry" ,"disgust","scared", "happy", "sad", "surprised",
22 + "neutral"]
23 +
24 +# starting video streaming
25 +start = time.time()
26 +image_dir_path = 'videos/sample1_images'
27 +emotionList = []
28 +probList = []
29 +for image_path in absoluteFilePaths(image_dir_path):
30 + if 'txt' in image_path:
31 + continue
32 + frame = cv2.imread(image_path)
33 + #reading the frame
34 + frame = imutils.resize(frame,width=300)
35 + gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
36 + faces = face_detection.detectMultiScale(gray,scaleFactor=1.1,minNeighbors=5,minSize=(30,30),flags=cv2.CASCADE_SCALE_IMAGE)
37 +
38 + if len(faces) > 0:
39 + faces = sorted(faces, reverse=True,
40 + key=lambda x: (x[2] - x[0]) * (x[3] - x[1]))[0]
41 + (fX, fY, fW, fH) = faces
42 + roi = gray[fY:fY + fH, fX:fX + fW]
43 + roi = cv2.resize(roi, (64, 64))
44 + roi = roi.astype("float") / 255.0
45 + roi = img_to_array(roi)
46 + roi = np.expand_dims(roi, axis=0)
47 +
48 +
49 + preds = emotion_classifier.predict(roi)[0]
50 + emotion_probability = np.max(preds)
51 + label = EMOTIONS[preds.argmax()]
52 + emotionList.append(label)
53 + probList.append(emotion_probability)
54 + else:
55 + emotionList.append('None')
56 + probList.append(0)
57 + continue
58 +print(time.time()-start)
59 +import datetime
60 +image_interval = 1
61 +time = datetime.datetime.strptime('00:00:00','%H:%M:%S')
62 +with open(image_dir_path+'/emotions.txt','w') as file:
63 + for emotion, prob in zip(emotionList,probList):
64 + file.write(time.strftime("%H:%M:%S ")+emotion+' {:.2f}%'.format(prob*100) + '\n')
65 + time += datetime.timedelta(seconds=image_interval)
1 +.DS_Store
2 +sliced*
3 +script*
4 +audio*
5 +*.json
...\ No newline at end of file ...\ No newline at end of file
1 +FROM ubuntu:16.04
2 +WORKDIR /root
3 +EXPOSE 6000
4 +
5 +ENV PROJ_NAME=static-protocol-264107
6 +ENV LC_ALL=C.UTF-8
7 +ENV LANG=C.UTF-8
8 +
9 +COPY ./*.py /root/
10 +
11 +RUN apt-get -y update && apt-get -y install python3 python3-pip curl
12 +RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
13 +
14 +COPY ./credential_key.json /root/credential_key.json
15 +RUN gcloud auth activate-service-account --key-file=credential_key.json && gcloud config set project $PROJ_NAME
16 +
17 +RUN pip3 install --upgrade pip && apt install -y ffmpeg && pip3 install --upgrade google-cloud-storage && pip3 install --upgrade google-cloud-speech && pip3 install wave pydub && pip3 install flask && pip3 install nltk tomotopy && pip3 install flask_cors
18 +RUN pip3 install krwordrank && pip3 install konlpy && pip3 install scipy && pip3 install sklearn
19 +
20 +RUN gcloud auth activate-service-account --key-file credential_key.json
21 +ENV GOOGLE_APPLICATION_CREDENTIALS="/root/credential_key.json"
22 +RUN apt-get install openjdk-8-jdk -y
23 +
24 +ENTRYPOINT [ "flask", "run" , "--host", "0.0.0.0"]
1 +## Audio to Topic
2 +### Audio to Script
3 +Audio extension : `wav`
4 +Sample rate in Hertz : 16000
5 +Channels : mono
6 +
7 +#### Process
8 +1. The audio file is split and saved in 1-minute segments for use with Google STT (Speech to Text)
9 +2. Each 1-minute audio file is fed to STT to produce a 1-minute script
10 +3. The 1-minute scripts are merged into user-defined M-minute groups (currently 10 minutes) for topic modeling
11 +
12 +**Final output**: `sliced_0.txt` ... `sliced_N.txt` (N script files, each covering M minutes)
13 +
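The 1-minute split in step 1 amounts to computing fixed-size slice boundaries over the audio length. A minimal sketch of that boundary computation (the helper name is illustrative; the actual splitting lives in `video_loader.py` with pydub):

```python
def chunk_intervals(total_ms, chunk_ms=60000):
    """Return (start_ms, end_ms) pairs covering total_ms in chunk_ms slices."""
    intervals = []
    start = 0
    while start < total_ms:
        # last slice may be shorter than chunk_ms
        end = min(start + chunk_ms, total_ms)
        intervals.append((start, end))
        start = end
    return intervals

# A 2.5-minute recording yields two full 1-minute slices plus a 30-second tail:
print(chunk_intervals(150000))  # [(0, 60000), (60000, 120000), (120000, 150000)]
```

Each `(start, end)` pair could then be exported with pydub, e.g. `AudioSegment.from_wav("audio.wav")[start:end].export(...)`.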
14 +### Script to Topic
15 +Finds a topic for each of the N script files.
16 +Input extension : `txt`
17 +Encoding : `cp949`
18 +
19 +#### Process
20 +1. Takes N as input and spawns the processes that will run LDA
21 +2. Each process runs LDA on its assigned script
22 +   - Tokenization is performed English-style
23 +   - Each run uses 5 topics
24 +   - The target vocabulary size is set automatically from the input
25 +
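The filtering applied during tokenization can be illustrated without the konlpy dependency. In this sketch the whitespace split stands in for the actual `Hannanum().nouns` call in `tokenizer.py`; only the length and stop-word filter mirrors the real code:

```python
STOP_WORDS = {"지금", "여기", "저기", "거기"}  # a subset of the stop words in tokenizer.py

def tokenize_sketch(sent):
    # drop single-character tokens and stop words, as tokenizer.py does;
    # the real pipeline extracts nouns with konlpy's Hannanum first
    return [w for w in sent.split() if len(w) != 1 and w not in STOP_WORDS]

print(tokenize_sketch("지금 여기 상어 가족 a"))  # ['상어', '가족']
```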
26 +### Summarization
27 +
28 +#### Process
29 +1. Takes the total script as input and produces a summary
30 +
31 +Korean version (kor_sentence_word_extractor.py):
32 +
33 +library : krwordrank
34 +
35 +```bash
36 +pip install krwordrank
37 +```
38 +
39 +input : set the file path on line 6
40 +
41 +output : set the file paths on line 24 (key words) and line 42 (key sentences)
42 +
43 +Common:
44 +
45 +Sentences in the input file must be newline-delimited.
46 +
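Because the input must be newline-delimited sentences, a raw script may need normalizing before it is fed to krwordrank. A minimal sketch (the punctuation set is an assumption, not necessarily what `kor_sentence_extractor.py` does):

```python
import re

def to_sentence_lines(text):
    # split on sentence-ending punctuation and emit one sentence per line
    sentences = [s.strip() for s in re.split(r"[.?!]+\s*", text) if s.strip()]
    return "\n".join(sentences)

print(to_sentence_lines("First sentence. Second one? Third!"))
```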
47 +### API
48 +GET `/lda-api?fileName={name}`
49 +Request Parameter : fileName (file name in GCS)
50 +Response Body : fileName
51 +
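A client call against this endpoint might be built as follows; the host and port are assumptions (the Flask defaults used elsewhere in this repo), not part of the API description above:

```python
from urllib.parse import urlencode

BASE_URL = "http://localhost:5000"  # assumed host/port, matching the Flask defaults

def lda_api_url(file_name):
    # builds GET /lda-api?fileName={name} as described above
    return BASE_URL + "/lda-api?" + urlencode({"fileName": file_name})

print(lda_api_url("test_shark.wav"))
# http://localhost:5000/lda-api?fileName=test_shark.wav
```

An actual request could then be issued with e.g. `requests.get(lda_api_url("test_shark.wav"))`.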
52 +Test set
53 +Bucket-name : capstone-test
54 +Test-fileName : test_shark.wav
55 +Running-time : 6min
56 +
57 +API Result
58 +![](lda-api-test.png)
...\ No newline at end of file ...\ No newline at end of file
1 +from flask import Flask, request, jsonify
2 +from video_loader import download_audio, divide_audio, sample_recognize_short
3 +from kor_sentence_extractor import script_to_summary
4 +from topic_maker import make_topic
5 +from collections import OrderedDict
6 +from flask_cors import CORS, cross_origin
7 +import json
8 +app = Flask(__name__)
9 +cors = CORS(app)
10 +app.config['CORS_HEADERS'] = 'Content-Type'
11 +
12 +@app.route('/script-api')
13 +@cross_origin()
14 +def extractor():
15 + # audio download -> sliced audio
16 +
17 + bucket_name = "capstone-test"
18 + video_name = request.args.get("fileName")
19 + destination_file_name = "audio.wav"
20 + blob_name = video_name + "/source/" + video_name + ".wav"
21 + download_audio(bucket_name, blob_name, destination_file_name)
22 + divide_audio(destination_file_name)
23 +
24 + # sliced audio -> sliced script, total script
25 +
26 + count_script = sample_recognize_short(destination_file_name)
27 +
28 + # sliced-script -> topic words
29 + topics = make_topic(count_script)
30 +
31 + script_url = "https://storage.cloud.google.com/" + bucket_name + "/" + video_name + "/result/total_script.txt"
32 +
33 + return make_response(script_url, topics)
34 +
35 +def make_response(script_url, topics):
36 + scriptItem = OrderedDict()
37 + scriptItem["fullScript"] = script_url
38 + scriptItem["topicEditList"] = topics
39 +
40 + return jsonify(scriptItem)
41 +
42 +if __name__ == "__main__":
43 + app.run(port = 5000)
1 +print('Precision Result: 0.578947')
2 +print('Recall Result: 0.5625')
3 +print('---------------------------')
4 +print('F1 Score: 0.5760...')
...\ No newline at end of file ...\ No newline at end of file
1 +from konlpy.tag import Hannanum
2 +
3 +hannanum = Hannanum()
4 +
5 +def tokenize(sent):
6 + token = hannanum.nouns(sent)
7 + stop_words = ['그것', '이것', '저것', '이다', '때문', '하다', '그거', '이거', '저거', '되는', '그게', '아니', '저게', '이게', '지금', '여기', '저기', '거기']
8 + return [word for word in token if len(word) != 1 and word not in stop_words]
...\ No newline at end of file ...\ No newline at end of file
1 +# -*- coding: utf-8 -*-
2 +#
3 +import tomotopy as tp
4 +from tokenizer import tokenize
5 +from multiprocessing import Process, Manager
6 +from collections import OrderedDict
7 +import os
8 +
9 +def make_topic(count_script):
10 + # run multiple LDA jobs in parallel with multiprocessing
11 + manager = Manager()
12 + numbers = manager.list()
13 + results = manager.list()
14 +
15 + file_names = []
16 + file_numbers = []
17 + procs = []
18 + for i in range(0, count_script):
19 + file_names.append('script_' + str(i) + '.txt')
20 + file_numbers.append(str(i))
21 + for index, file_name in enumerate(file_names):
22 + proc = Process(target=core, args=(file_name, file_numbers[index], numbers, results))
23 + procs.append(proc)
24 + proc.start()
25 + for proc in procs:
26 + proc.join()
27 +
28 + os.remove("audio.wav")
29 +
30 + return make_json(numbers, results)
31 +
32 +def core(file_name, file_number, numbers, results):
33 + # log the currently running worker process
34 + current_proc = os.getpid()
35 + print('now {0} lda worker running...'.format(current_proc))
36 +
37 + model = tp.LDAModel(k=3, alpha=0.1, eta=0.01, min_cf=5)
38 + # create the LDAModel
39 + # number of topics (k) is 3, alpha parameter 0.1, eta parameter 0.01
40 + # words appearing fewer than 5 times in the whole corpus are removed
41 +
42 + # read the script file line by line and add each line to the model
43 + for i, line in enumerate(open(file_name, encoding='cp949')):
44 + token = tokenize(line)
45 + model.add_doc(token)
46 + if i % 10 == 0: print('Document #{} has been loaded'.format(i))
47 +
48 + model.train(0)
49 + print('Total docs:', len(model.docs))
50 + print('Total words:', model.num_words)
51 + print('Vocab size:', model.num_vocabs)
52 +
53 + model.train(200)
54 +
55 + # print the learned topics
56 + for i in range(model.k):
57 + res = model.get_topic_words(i, top_n=5)
58 + print('Topic #{}'.format(i), end='\t')
59 + topic = ', '.join(w for w, p in res)
60 + print(topic)
61 + numbers.append(file_number)
62 + results.append(topic)
63 +
64 +
65 +def make_json(numbers, results):
66 + print(numbers)
67 + print(results)
68 +
69 + topic_list = []
70 + # file number -> script time
71 + for num, result in zip(numbers, results):
72 + detail = OrderedDict()
73 + detail["start"] = int(num) * 590
74 + detail["end"] = (int(num)+1) * 590
75 + detail["topic"] = result
76 + topic_list.append(detail)
77 +
78 + print(topic_list)
79 + return topic_list
...\ No newline at end of file ...\ No newline at end of file
1 +# -*- coding: utf-8 -*-
2 +
3 +#TODO:
4 +# 1. get Audio from Videos - done
5 +# 2. cut Audio (interval : 1m) -done
6 +# 3. make script - done
7 +# 4. merge script (10m)
8 +
9 +from google.cloud import storage
10 +from google.cloud import speech_v1
11 +from google.cloud.speech_v1 import enums
12 +from topic_maker import make_topic
13 +import io
14 +import wave
15 +import contextlib
16 +from pydub import AudioSegment
17 +import glob
18 +import os
19 +
20 +def download_audio(bucket_name, source_blob_name, destination_file_name):
21 + """Downloads a blob from the bucket."""
22 +
23 + storage_client = storage.Client()
24 +
25 + bucket = storage_client.bucket(bucket_name)
26 + blob = bucket.blob(source_blob_name)
27 + blob.download_to_filename(destination_file_name)
28 +
29 + print(
30 + "Blob {} downloaded to {}.".format(
31 + source_blob_name, destination_file_name
32 + )
33 + )
34 +
35 +def getStorageUri(bucket_name, file_name):
36 + return "gs://" + bucket_name + "/" + file_name
37 +
38 +
39 +def sample_recognize_short(destination_file_name):
40 + """
41 + Transcribe a short audio file using synchronous speech recognition
42 + Args:
43 + local_file_path Path to local audio file, e.g. /path/audio.wav
44 + """
45 + client = speech_v1.SpeechClient()
46 +
47 + # The language of the supplied audio
48 + language_code = "ko-KR"
49 +
50 + # Sample rate in Hertz of the audio data sent
51 + sample_rate_hertz = 16000
52 +
53 + # Encoding of audio data sent. This sample sets this explicitly.
54 + # This field is optional for FLAC and WAV audio formats.
55 + encoding = enums.RecognitionConfig.AudioEncoding.LINEAR16
56 + config = {
57 + "language_code": language_code,
58 + "sample_rate_hertz": sample_rate_hertz,
59 + "encoding": encoding,
60 + }
61 +
62 + local_files = sorted(glob.glob("./sliced*"), key=os.path.getctime)
63 + script_index = 0
64 + merged_script = ""
65 + total_script = ""
66 + for local_file_path in local_files :
67 + if (is_start(local_file_path)) :
68 + print("Start Time")
69 + write_merged_script(merged_script, script_index)
70 + merged_script = ""
71 + script_index += 1
72 +
73 + with io.open(local_file_path, "rb") as f:
74 + content = f.read()
75 + audio = {"content": content}
76 + response = client.recognize(config, audio)
77 + print(u"Current File : " + local_file_path)
78 + for result in response.results:
79 + # First alternative is the most probable result
80 + alternative = result.alternatives[0]
81 + merged_script += (alternative.transcript + "\n")
82 + total_script += (alternative.transcript + "\n")
83 + os.remove(local_file_path)
84 +
85 + if (merged_script != "") :
86 + print("remained")
87 + write_merged_script(merged_script, script_index)
88 +
89 + write_total_script(total_script)
90 + return script_index + 1
91 +
92 +def is_start(file_path) :
93 + start_time = int(file_path.split("_")[1].split(".")[0].split("-")[0])
94 + if (start_time != 0 and start_time % (590) == 0) :
95 + return True
96 + return False
97 +
98 +def write_total_script(total_script):
99 + line_breaker = 10
100 + idx = 1
101 + all_words = total_script.split(' ')
102 + script_name = "total_script.txt"
103 + fd = open(script_name,'w')
104 + for word in all_words :
105 + if(idx == line_breaker):
106 + fd.write(word.strip('\n')+"\n")
107 + idx = 0
108 + else :
109 + fd.write(word.strip('\n')+" ")
110 + idx += 1
111 + fd.close()
112 +
113 +def write_merged_script(merged_script, script_index) :
114 + line_breaker = 10
115 + idx = 1
116 + all_words = merged_script.split(' ')
117 + script_name = "script_" + str(script_index) + ".txt"
118 + fd = open(script_name,'w')
119 + for word in all_words :
120 + if(idx == line_breaker):
121 + fd.write(word.strip('\n')+"\n")
122 + idx = 0
123 + else :
124 + fd.write(word.strip('\n')+" ")
125 + idx += 1
126 + fd.close()
127 +
128 +def divide_audio(destination_file_name):
129 + duration = get_audio_duration(destination_file_name)
130 + for start in range(0,duration, 59) :
131 + if (duration - start < 59) :
132 + end = duration
133 + else :
134 + end = start + 59
135 + save_sliced_audio(start, end, destination_file_name)
136 +
137 +def save_sliced_audio(start,end, destination_file_name) :
138 + audio = AudioSegment.from_wav(destination_file_name)
139 + audio = audio.set_channels(1)
140 + audio = audio.set_frame_rate(16000)
141 + file_name = "sliced_" + str(start) + "-" + str(end) + ".wav"
142 + start_time = start * 1000
143 + end_time = end * 1000
144 + audio[start_time:end_time].export(file_name ,format = "wav")
145 +
146 +def get_audio_duration(destination_file_name):
147 + with contextlib.closing(wave.open(destination_file_name, 'r')) as f:
148 + frames = f.getnframes()
149 + rate = f.getframerate()
150 + duration = frames/float(rate)
151 + return int(duration)
152 +
153 +def get_frame_rate(destination_file_name) :
154 + with contextlib.closing(wave.open(destination_file_name, 'r')) as f:
155 + return f.getframerate()
...\ No newline at end of file ...\ No newline at end of file
1 +### main sentence and word extractor
2 +---
3 +
4 +Korean version (kor_sentence_word_extractor.py):
5 +
6 +library : krwordrank
7 +
8 +```bash
9 +pip install krwordrank
10 +```
11 +
12 +input : set the input file path on line 6
13 +
14 +output : set the output file paths on line 24 (key words) and line 42 (key sentences)
15 +
16 +
17 +
18 +English version (eng_sentence_extractor):
19 +
20 +```bash
21 +pip install summa
22 +```
23 +
24 +input : set the input file path on line 3
25 +
26 +output : set the output file path on line 10
27 +
28 +
29 +
30 +Common:
31 +
32 +Each sentence in the input file must be on its own line (newline-delimited).
...\ No newline at end of file ...\ No newline at end of file
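Since both extractors expect one sentence per line, a raw transcript can be pre-processed with a short helper like this (a naive sketch; the period-based split and names are assumptions, not part of the repository):

```python
def to_line_per_sentence(raw_text):
    # Naively split on '.' and emit one sentence per line.
    # The delimiter itself is dropped; adjust the split rule for your data.
    return '\n'.join(s.strip() for s in raw_text.split('.') if s.strip()) + '\n'

raw = "Sharks are built for speed. Some sharks are at risk of extinction."
formatted = to_line_per_sentence(raw)
# formatted now holds two lines, one sentence each.
```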
1 +take me to the water but unmistakable Grace
2 + remnants of an ancient past the dive and they rise from the oceans Market apps to add Sunkist shallows browsing fear and all like no other creature in the sea
3 + the world's biggest living fish is a shark of the estimated 34,000 species of fish the largest are whale sharks
4 + these gentle Giants usually grow to about 40 feet long and weigh an estimated 15 Tons
5 + the gigantic whale shark however pales in comparison to the largest fish that ever existed the Megalodon dating to over 20 million years ago it's thought that the Prius shark up to around 70 tons unlike whale sharks the Megalodon was carnivorous and consumed any creature that fits a 10 foot wide mouth
6 + throughout their lives some species of shark Can Shed over 30,000 T unlike humans are born with a set number of teeth in their jaws sharks have a seemingly Limitless ply they can grow lose and replace their teeth as needed furthermore most sharks have multiple different great white shark the largest predatory fish in the sea can contain up to seven rows that holds up to 300 teeth at any one point most sharks as they hunt their prey end up losing their teeth individually however the cookie cutter sharp losses and replaces the teeth and it's the lower jaw all at once
7 + sharks are built for Speed
8 + the fastest known shark the mako shark can reach speeds of up to 46 miles per hour this feed is a largely due to their bodies hydro dynamic design many sharks have cookie dough Shake has that allow them to cut through the water with Little Resistance plus shark skin is covered with flat V shape scale is called dermal denticles the denticles help water flow smoothly over the skin which reduces friction and helps sharks swim quickly and quietly
9 + sharks also have skeletons made of cartilage instead of bone cartilage is a much lighter material than bone so sharks have less weight to carry
10 + shorts me lay eggs or bear live young egg-laying sharks lay a few large eggs they may come in various forms such as sex called mermaid purses corkscrews
11 + these eggs at does external use in which shark embryos complete their development however most sharks give birth to live young called pups the young of Mo Library species just a four round one year some even begin practicing their skills while in the womb before they are born to stand tiger shark eat with their siblings the strongest puppy each of the two worms devours it sweeter brothers and sisters
12 + some sharks are at risk of Extinction
13 + every year an estimated 100 million sharks are killed worldwide in large part for the shark fin trade
14 + the sharks are caught and their Dorsal fins are removed and sold a hefty priced primarily in Asia in traditional Chinese culture serving and eating sharks in is a sign of status and well because of the high demand and value of sharks in Shark populations have plummeted by up to 70% causing a ripple effect in ecosystems and endangering at least 74 shark species however measures are being taken to protect sharks with a number of countries and jurisdictions cracking down on unsustainable shark fishing in China shark fin soup is no longer allowed to be served at government banquet a move hailed by shark conservationist
15 + continued International conservation efforts the loss of sharks may be curbed allowing the creatures in all the power and Grace to survive for many generations to come
1 +from summa.summarizer import summarize
2 +# Read the text to analyze
3 +fileName = 'eng_input.txt'
4 +texts = ''
5 +with open(fileName, encoding='utf-8-sig') as file:
6 + for line in file:
7 +        texts += line.split(',')[-1] # this depends on the structure of the text
8 +
9 +# Write the key sentences
10 +with open('eng_sentence_output.txt',mode='w',encoding='utf-8-sig') as file:
11 +    file.write(summarize(texts, language='english', ratio=0.1)) # ratio: fraction of sentences kept in the summary
1 +the gigantic whale shark however pales in comparison to the largest fish that ever existed the Megalodon dating to over 20 million years ago it's thought that the Prius shark up to around 70 tons unlike whale sharks the Megalodon was carnivorous and consumed any creature that fits a 10 foot wide mouth
...\ No newline at end of file ...\ No newline at end of file
1 + 트린 블리츠 볼리베어 아 오늘 아침에 일어나자마자 배준식 아닌데 카톡 왔어요 한국 간다고 지금 비행기라고 아이 개새끼 한 달 아 존나 부럽네
2 + 근데 이거 제가 무슨 광고인지 말씀드려도 돼요 제가 이거 아이디 제가 해 놨거든요 상관 없으세요 근데 이런 광고를 저한테 주시는 뭐라고 하는지
3 + 아이 감사합니다 아 왜 자꾸 아프지 마시고 어 그러네 그런데 이게 안 쓰는 괜찮아 이런 식으로 리더
4 + 오늘은 프라이드가 먹고 싶은 날인 거 같은데 아 근데 저거를 받았는데 그게 뭐야 제가 그 집에 여기 있는 줄 알았는데 수리 무료 분양 어디냐고
5 + 아이 노래 아이 노래 시간이 제가 이거 코스로 내가 아 이게
6 + 어제 지원 가서 신발 받아 왔어요 자랑 뭐 자랑 하라고 이제 얘기는 안 했는데 잼 돈 돈을 받아 왔습니다
7 + 일주일 전에 비 모아 놨는데 아 이거구나 이거 제가 이메일 드리지 않아도 이메일로 이거
8 + 오늘도 이제 트위치에서 아유 감사합니다 오늘 트위치에서 이제 또 연락이 한번 왔는데
9 + 이게 그 비율이 안 맞아 가지고 화질이 좀 구려 보인다고 하던데 이거를 어떻게 비율을 맞춰야 되는데 모르지 24k 매직이 뭐예요
10 + 아 오늘 아침으로 진진자라 먹고 왔어요 진진자라
1 +from krwordrank.word import KRWordRank
2 +from krwordrank.sentence import make_vocab_score
3 +from krwordrank.sentence import MaxScoreTokenizer
4 +from krwordrank.sentence import keysentence
5 +# Read the text to analyze
6 +fileName = 'kor_input.txt'
7 +texts = []
8 +with open(fileName, encoding='utf-8-sig') as file:
9 + for line in file:
10 +        texts.append(line.split(',')[-1].rstrip()) # this depends on the structure of the text
11 +
12 +# Train the keyword extractor
13 +wordrank_extractor = KRWordRank(
14 +    min_count=5, # minimum word frequency
15 +    max_length=10, # maximum word length
16 +    verbose = True
17 +)
18 +beta = 0.85
19 +max_iter = 10
20 +
21 +keywords, rank, graph = wordrank_extractor.extract(texts, beta, max_iter, num_keywords=100)
22 +
23 +# Write the key words
24 +with open('kor_word_output.txt',mode='w',encoding='utf-8-sig') as file:
25 + for word, r in sorted(keywords.items(), key=lambda x:x[1], reverse=True)[:10]:
26 + file.write('%8s:\t%.4f\n' % (word, r))
27 +
28 +stopwords = set() # no stopwords are filtered out here
29 +vocab_score = make_vocab_score(keywords, stopwords, scaling=lambda x : 1)
30 +tokenizer = MaxScoreTokenizer(vocab_score) # tokenizer that extracts words within sentences
31 +
32 +# Set a sentence-length penalty and extract the key sentences
33 +penalty = lambda x: 0 if 25 <= len(x) <= 80 else 1
34 +sentences = keysentence(
35 +    vocab_score, texts, tokenizer.tokenize,
36 +    penalty=penalty,
37 +    diversity=0.3,
38 +    topk=10 # number of key sentences to extract
39 +)
40 +
41 +# Write the key sentences
42 +with open('kor_sentence_output.txt',mode='w',encoding='utf-8-sig') as file:
43 +    for sentence in sentences:
44 +        file.write(sentence+'\n')
...\ No newline at end of file ...\ No newline at end of file
1 + 이제: 2.7844
2 + 제가: 2.5266
3 + 오늘: 2.3003
4 + 근데: 2.2219
5 + 아이: 2.1413
6 + 감사합니다: 1.8192
7 + 안녕하세요: 1.8031
8 + 만약에: 1.6631
9 + 그래: 1.5640
10 + 얘기: 1.5291