Install git: Git for Windows, Git for Mac
git clone https://github.com/govmeeting/govmeeting.git
[ To contribute, it's better to fork the repository on GitHub and then clone your fork. ]
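If you fork first, clone your fork instead; replace your-github-username with your own account name:

git clone https://github.com/your-github-username/govmeeting.git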
Install Node.js.
Install the .NET Core SDK.
Install FFmpeg. This is for processing audio & video files.
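To confirm that FFmpeg is installed and on your PATH, run:

ffmpeg -version

FFmpeg is normally driven by the batch processing, so you won't usually run it by hand. For reference, a typical audio-extraction command, with purely illustrative file names, looks like:

ffmpeg -i meeting.mp4 -vn meeting.flac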
There are three separate applications:
When ClientApp starts, it checks whether WebApp is running. If not, it uses test data, instead of making API calls to the backend. This allows frontend code to be developed independently.
WorkflowApp runs as a standalone process. It:
Depending on your preference, you can build, run and develop using:
Below are procedures for each of these.
Notes:
If you run only ClientApp, you can open a browser to "localhost:4200" to see the app. ClientApp will recognize that WebApp is not running and will use internal test data.
But if you also run WebApp, it will automatically open a browser to "localhost:44333" and display ClientApp. In this case it uses a proxy to the dev server running on "localhost:4200".
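This proxying is the standard ASP.NET Core SPA middleware pattern (from the Microsoft.AspNetCore.SpaServices.Extensions package). The sketch below only illustrates that pattern; it is not WebApp's exact Startup code.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        // ... other middleware (static files, MVC, auth) omitted ...

        app.UseSpa(spa =>
        {
            spa.Options.SourcePath = "ClientApp";
            if (env.IsDevelopment())
            {
                // Forward non-API requests to the Angular dev server on port 4200
                spa.UseProxyToSpaDevelopmentServer("http://localhost:4200");
            }
        });
    }
}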
In Task Runner Explorer - Solution, run either:
This is the Angular front-end SPA.
This is the .NET Web API server.
This standalone app performs batch jobs such as downloading, processing, and transcribing meeting recordings.
When WorkflowApp first starts, it creates a folder "DATAFILES" and within it the following 3 sub-folders:
The following setting within appsettings.json tells it to copy test files to DATAFILES. The test files include a sample PDF transcript and an MP4 recording of meetings.
"InitializeWithTestData": true,
WorkflowApp pre-processes the transcript and produces a JSON file with the extracted data. If you have set up a Google Cloud account, it will transcribe the MP4 recording. You will find the results of both in the DATAFILES folder.
You will note that the MP4 recording and its transcription are split into 3-minute work segments. This allows multiple volunteers to work simultaneously on proofreading the transcription.
Besides the test files on Google Drive, you can process your own recordings of meetings:
If you have a Google Cloud account set up, it will transcribe the recording.
The goal is to eventually write code smart enough to process all transcript formats, but for now we need to add custom code for new formats. If your city, town, etc., produces transcripts of its meetings, it would be a great help if you contributed the code to handle them. Please see Github Issue #93
You may not need to install and set up the database in order to do development. There are test stubs that substitute for calling the database. See "Test Stubs" below.
If you are using Visual Studio or Visual Studio Code, the SQL Server Express LocalDb provider is already installed. Otherwise, follow "LocalDb Provider Installation" below.
Go to SQL Server Express. For Windows, download the "Express" edition of SQL Server. During installation, choose "Custom" and select "LocalDb".
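On Windows you can check that LocalDb installed correctly from a command prompt. These are the standard SqlLocalDB commands, not anything specific to Govmeeting:

sqllocaldb info
sqllocaldb start mssqllocaldb

The first command lists the LocalDb instances on the machine; the second starts the default mssqllocaldb instance referenced in the connection settings below.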
LocalDb is also available for macOS and Linux. If you install it on either platform, please update this document with the steps and submit a Pull Request.
Besides LocalDb, EF Core supports other providers that you can use for development, including SQLite. You will need to modify the DbContext setup in Startup.cs and the connection string in appsettings.json.
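For example, switching providers means changing the provider call when the DbContext is registered. A minimal sketch of that registration, assuming a context class named ApplicationDbContext and a connection string named "DefaultConnection" (both hypothetical names, not necessarily what Govmeeting uses):

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical context name; the real DbContext class in the project may differ.
public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options) { }
}

public static class DbContextSetup
{
    public static void Register(IServiceCollection services, IConfiguration config)
    {
        // SQL Server LocalDb, using a connection string from appsettings.json
        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlServer(config.GetConnectionString("DefaultConnection")));

        // To switch to SQLite for development, swap the provider call instead:
        // options.UseSqlite("Data Source=govmeeting.db");
    }
}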
The database is built via the "code first" feature of Entity Framework Core. It examines the C# classes in the data model and automatically creates all tables and relations. There are two steps: (1) Create the "migrations" code for doing the update and (2) execute the update.
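From the command line, those two steps typically look like the following. "InitialCreate" is only an example migration name, the commands should be run from the project that contains the DbContext, and on newer SDKs the dotnet-ef tool must be installed first:

dotnet tool install --global dotnet-ef
dotnet ef migrations add InitialCreate
dotnet ef database update

In Visual Studio you can instead use the Package Manager Console equivalents, Add-Migration and Update-Database.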
Add the following to your user settings.json in VsCode:
"mssql.connections": [
{
"server": "(localdb)\\mssqllocaldb",
"database": "Govmeeting",
"authenticationType": "Integrated",
"profileName": "GMProfile",
"password": ""
}
],
SQL Operations Studio is a cross-platform, open-source "data management tool that enables working with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux." You can download it free here.
If you use this, or another tool, for exploring SQL Server databases, please update these instructions.
The code to store/retrieve transcript data in the database is not yet written. Therefore DatabaseRepositories_Lib uses static test data instead. In WebApp/appsettings.json, the property "UseDatabaseStubs" is set to "true", telling it to call the stub routines.
However, the user registration and login code in WebApp does use the database. It accesses the Asp.Net user authentication tables. WebApp authenticates API calls from ClientApp based on the currently logged-in user.
You can use the "NOAUTH" pre-processor value in WebApp to bypass authentication. Use one of these methods:
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
  <DefineConstants>NOAUTH</DefineConstants>
</PropertyGroup>
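Exactly where WebApp checks NOAUTH is not shown here, but the usual pattern is to compile out the [Authorize] attribute when NOAUTH is defined. A hypothetical controller, for illustration only:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class MeetingController : Controller   // hypothetical name, not an actual WebApp controller
{
#if !NOAUTH
    [Authorize]   // normally require a logged-in user
#endif
    [HttpGet]
    public IActionResult Get() => Ok("transcript data");
}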
To use the Google Speech APIs for speech-to-text conversion, you need a Google Cloud Platform (GCP) account. For most development work in Govmeeting, you can use existing test data. But if you want to transcribe new recordings, you will need a GCP account. The Google API is able to transcribe recordings in more than 120 languages and variants.
Google provides developers with a free account which includes a credit (currently $300). Use of the Speech API is free for up to 60 minutes of conversion per month. After that, the cost for the "enhanced model" (which is what we need) is $0.009 per 15 seconds ($2.16 per hour).
Open an account with Google Cloud Platform
Go to the Google Cloud Dashboard and create a project.
Go to the Google Developer's Console and enable the Speech & Cloud Storage APIs
Generate a "service account" credential for this project. Click on Credentials in developer's console.
Give this account "Editor" permissions on the project. Click on the account. On the next page, click Permissions.
Download the credential JSON file.
Create a SECRETS folder as a sibling to the cloned project folder.
Put the credential file in SECRETS and rename it TranscribeAudio.json.
Set the startup project in Visual Studio to src/Workflow/WorkflowApp. Press F5.
Copy (don't move) one of the sample MP4 files from testdata to DATAFILES/RECEIVED.
The program will now recognize that a new file has appeared and start processing it. The MP4 file will be moved to "COMPLETED" when done. You will see the results in subfolders created in the "DATAFILES" directory.
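For example, the copy might look like this; the file name and folder locations are illustrative, so use wherever your testdata and DATAFILES folders actually live.

On Windows:
copy testdata\sample-meeting.mp4 DATAFILES\RECEIVED

On macOS or Linux:
cp testdata/sample-meeting.mp4 DATAFILES/RECEIVED/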
In appsettings.json, there is a property "MaxRecordingSize". It is currently set to "180". This causes the transcription routine in ProcessRecording_Lib to process only the first 180 seconds of the recording.
You will need these keys if you want to use or work on certain features of the registration and login process.
Create a SECRETS folder as a sibling to the cloned project folder. Create a file in it named "appsettings.Development.json", with the following format.
{
  "ExternalAuth": {
    "Google": {
      "ClientId": "your-client-Id",
      "ClientSecret": "your-client-secret"
    }
  },
  "ReCaptcha:SiteKey": "your-site-key",
  "ReCaptcha:Secret": "your-secret"
}
Edit it to contain the keys that you just obtained.