Using the Google Cloud Platform Console for App Engine
The Google Cloud Platform Console lets you manage your App Engine application along with other Google Cloud Platform resources. Use your Google account to access the Cloud Platform Console at https://console.cloud.google.com/.
Use the Cloud Platform Console to monitor application performance and control various serving and configuration settings, depending on your user role.
Note: To learn more about the many Google Cloud Platform features you can manage from the Cloud Platform Console, go to the Cloud Platform Console help center.
Contents
Managing a Cloud Platform project
Creating and deleting a project
Setting the server location
Enabling billing and setting a spending limit
Disabling billing
Managing billing
Managing permissions
Managing cookies, authentication, and logs retention
Using custom domains and SSL
Managing an App Engine application
Viewing snapshots of the current status
Viewing the status of running instances
Managing Versions
Viewing logs
Using security scans
Managing application resources
Viewing quota details
Managing task queues
Managing memcache
Managing datastore
Managing a Cloud Platform project
Creating and deleting a project
You can manage all of your projects in the Google Cloud Platform Console.
To create a project, click Create Project, enter a name, and click Create.
To delete a project, click the trash can icon for the appropriate project. This will delete the project and all the resources used within the project.
For more information, see the Cloud Platform Console help page on Creating, deleting, and restoring projects.
Setting the server location
When you create your project, you can specify the location from which it will be served. In the new project dialog, click on the link to Show Advanced Options, and select a location from the pulldown menu:
us-central
us-east1
europe-west
If you select us-east1 your project will be served from a single region in South Carolina. The us-central and europe-west locations contain multiple regions in the United States and western Europe, respectively. Projects deployed to either us-central or europe-west may be served from any one of the regions they contain. If you want to colocate your App Engine instances with other single-region services, such as Google Compute Engine, you should select us-east1.
Note the following limitations:
You cannot change the location after the project has been created.
Python 2.5 support is deprecated and new Python 2.5 apps cannot be created in any location. Apps written in Python 2.7, and all other App Engine languages, can be served from any location.
The flexible environment is not currently available for projects located in europe-west.
Enabling billing and setting a spending limit
If your application needs more resources than the free quotas, you can enable billing to pay for the additional usage. If you have a billing account when you create a project, then billing will automatically be enabled. For more information, see Billing settings.
If you don't have a billing account, add one in the Cloud Platform Console.
Go to the Billing accounts page.
Click New billing account and follow the on-screen instructions to set up a billing account.
Select a project, and enable billing for your project.
Go to the Project billing settings page.
Click Enable billing. If billing is already enabled, then the billing account for the project is listed.
After you enable billing, there is no limit to the amount you may be charged until you set a daily spending limit. It's a good idea to specify a spending limit to gain more control over application costs.
Create or change the spending limit.
Go to the Application Settings.
Click Edit and specify a spending limit. Click Save.
The spending limit only applies to App Engine resources for the selected project:
You may still be charged for other Google Cloud Platform resources.
If you have multiple projects, you may want to set the spending limit for each project.
When you increase the daily spending limit, the new limit takes effect immediately. However, if you lower the spending limit, the change will take effect immediately only if the current daily usage is below the old limit. Otherwise, the new spending limit takes effect the next day.
For more information, see Spending Limits.
Disabling billing
Once you have enabled billing, if you want to stop automatic payments for a project, you must disable billing for the project. Alternatively, if you also want to release all the resources used in a project, you can shut down the project.
Managing billing
To manage a billing account, go to Billing accounts and select the account. You can then:
View your transaction history for the account and make a payment on the History tab.
Change your payment method on the Settings tab.
Add billing administrators on the Administrators tab.
Set up an alert for when monthly charges exceed a threshold on the Alerts tab.
Managing permissions
To manage permissions for both members and service accounts, go to the Permissions page. You can assign edit or view permissions to project members and service accounts. You can also designate members as project owners.
Assigning project member roles
Add project members and assign each member a role in the Permissions page. A role determines the level of access to a project and its application and resources. For App Engine applications, the role also controls the permissible actions in the gcloud and appcfg command line tools that are used to deploy and manage applications.
Role | Google Cloud Platform Console permissions | appcfg permissions |
---|---|---|
Viewer | View application information | Request logs |
Editor | View application information and edit application settings | Upload/rollback application code, update indexes/queues/crons |
Owner | All viewer and editor privileges, plus invite users, change user roles, and delete an application | Upload/rollback application code, update indexes/queues/crons |
Managing service account permissions
If your project uses server-to-server interactions such as those between a web application and Google Cloud Storage, then you may need a service account, which is an account that belongs to your application instead of to an individual end user.
To add a service account or to edit service account permissions, go to the Permissions page.
Managing cookies, authentication, and logs retention
If you have owner permissions, in the Application Settings, you may set:
The Google login cookie expiration interval. Cookies can expire in one day, one week, or two weeks.
The Google authentication method. If you do not wish to authenticate against a Google Apps domain, you can switch to authentication using the Google Accounts API.
The logs retention criteria (storage limit and time interval). By default, application logs are stored free of charge for up to 90 days and with a 1 GB maximum size limit. If you enable billing, you can increase the size limit and extend the time limit up to one year. The cost for log storage above the free quota is $0.026 per gigabyte per month.
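As an illustration of the pricing rule above, the billable amount is only the storage above the free quota (the helper name and defaults are assumptions based on the stated 1 GB free limit and $0.026/GB/month rate):

```python
def monthly_log_storage_cost(stored_gb, free_gb=1.0, rate_per_gb=0.026):
    # Only log storage above the free quota is billed, at $0.026 per GB per month.
    billable_gb = max(0.0, stored_gb - free_gb)
    return billable_gb * rate_per_gb

# e.g. 11 GB of stored logs -> 10 billable GB -> about $0.26/month
```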
Using custom domains and SSL
Specify a custom domain for your App Engine application using the domains page. Optionally, you can also specify an SSL certificate to be used with your custom domain via the certificates page.
For complete instructions, see Using Custom Domains and SSL.
Managing an App Engine application
Viewing snapshots of the current status
View usage graphs and current status tables for your app in the Dashboard.
Change the scope of the information displayed by selecting the modules and versions of your app using drop-down menus at the top of the dashboard. If you have enabled traffic splitting, the percentage of traffic routed to each version is indicated in parentheses. Click on the link at the top right to view your app for the scope you've selected.
Usage graphs
View specific performance metrics or resources using the drop-down menu at the top left, below the module/version selectors. Select a time period (from 1 hour to 30 days) at the top right, then select one of the graph types:
Graph | Description |
---|---|
Error Details | The error count per second, broken down into client (4xx) and server (5xx) errors. This graph is similar to the error data shown in the Summary graph. |
Instances | A count of instances per second. It breaks instance activity into the number of new instances created and the number of active instances (those that have served requests). |
Latency | The average number of milliseconds spent serving dynamic requests only. This includes processing time, but not the time it takes to deliver the response to the client. |
Loading Latency | The average number of milliseconds used to respond to the first request to a new instance. This includes instance loading and initialization time as well as the time required to process the request. It does not include the time it takes to deliver the response to the client. |
Memory Usage | The total memory usage across all instances, in MB. |
Memcache Operations | The count-per-second of all memcache key operations. Note that each item in a batch request counts as one key operation. |
Memcache Compute Units | This graph approximates the resource cost of memcache. It is measured in compute units per second, which is computed as a function of the characteristics of a memcache operation (value size, read vs. mutation, etc.). |
Memcache Traffic | Measured in bytes-per-second. It is broken down into memcache bytes sent and received. |
Requests by Type | Static and dynamic requests per-second. |
Summary | The per-second count of requests and errors. Total Requests includes static, dynamic, and cached requests. Errors are broken down into client (4xx) and server (5xx) errors. |
Traffic | The number of bytes-per-second received and sent while handling all requests. |
Utilization | The total CPU activity in cycles-per-second. |
Current status tables
Below the dashboard graph, you can view tables with the current status of the module(s) and version(s) that you've specified at the top of the page:
Table | Description |
---|---|
Instances | Instances are grouped by App Engine release number. For each release, the table shows the total number of instances and the average QPS, latency, and memory. |
Billing Status | Displays the current usage and cost of billable resources. |
Current Load | Application activity is shown by URI. For each URI the table shows requests/minute, total requests in the past 24 hours, and the number of runtime mcycles and average latency in the last hour. |
Server Errors | Reports the URIs that have generated the most server (5xx) errors in the past 24 hours. The total number of errors is shown, along with the percentage ratio of errors to total requests for the URI. |
Client Errors | Reports the URIs that have generated the most client (4xx) errors in the past 24 hours. The total number of errors is shown, along with the percentage ratio of errors to total requests for the URI. |
You can sort the tables on most of the columns. Place the mouse to the right of a column title and click the caret that appears to sort in ascending or descending order.
Viewing the status of running instances
You can view information for every instance.
The top of the page is similar to the dashboard. Use the drop-down menus to select a specific module and version. You can see the same dashboard graphs on this page.
Below the graph, a table lists every instance in the scope you selected. If the selected module is running in the flexible environment, a column indicates whether its VM is being managed by Google or by the user. The other columns in the table show:
The average Queries Per Second and latency (ms) over the last minute
The number of requests received in the last minute
The number of errors reported in the last minute
Current memory usage (MB)
The start time for the instance, indicating its age (how long it's been running)
A link to the logs viewer for the instance
The instance's availability type: resident or dynamic.
Managing Versions
In the Cloud Platform Console, you can manage different versions of your app.
Use the drop-down menu to select a specific module. All of the module's deployed versions will appear in a table below, showing:
The version name
The percent of traffic routed to each version
The version size
The version runtime
The number of instances of each version
The deployment time
If you have owner or edit permissions, you can delete a version, or use traffic splitting and traffic migration to control the way requests are routed to versions of modules.
Note: The performance settings for modules are included in the module's configuration file. These settings are made at deployment time and cannot be changed from the Cloud Platform Console.
Traffic Splitting
Traffic splitting lets you specify a distribution (by percent) of traffic across two or more versions of a module. Traffic splitting is applied to URLs that do not explicitly target a version, like `<your-project>.appspot.com` (which distributes traffic to versions of the default module) or `<your-module>.<your-project>.appspot.com` (which distributes traffic to versions of `<your-module>`). This allows you to roll out features for your app slowly over a period of time. It can also be used for A/B testing.
At the bottom of the Versions page is a Traffic Splitting section. Click Edit and select two or more versions, specifying the traffic percentage assigned to each. The sum of all percentages should be 100%.
To disable traffic splitting, select a single version from the version list and press Route all traffic.
When you have specified two or more versions for splitting, you must choose whether to split the traffic by IP address or an HTTP cookie. It's easier to set up an IP address split, but a cookie split is more precise.
IP Address Splitting
When the application receives a request, it hashes the IP address to a value between 0–999, and uses that number to route the request. IP address splitting has some significant limitations:
Because IP addresses are independently assigned to versions, the resulting traffic split will differ somewhat from what you specify. For example, if you ask for 5% of traffic to be delivered to an alternate version, the actual percentage of traffic the version receives might be between 3–7%. The more traffic your application receives, the more accurate the split will be.
IP addresses are reasonably sticky, but are not permanent. Users connecting from cell phones may have a shifting IP address throughout a single session. Similarly, a user on a laptop may be moving from home to a cafe to work, and hence also shifting through IP addresses. As a result, the user may have an inconsistent experience with your app as their IP address changes.
Requests sent to your app from outside of Google's cloud infrastructure will work normally. However, requests sent from your app to another app (or to itself) are not split between versions, because they all originate from a small number of IP addresses. This applies to any traffic between apps within Google's cloud infrastructure. If you need to send requests between apps, use cookie splitting instead.
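The routing scheme described above can be sketched as follows; the specific hash function and the ordered-split layout are illustrative assumptions (App Engine's actual hashing is internal and unspecified):

```python
import hashlib

def ip_bucket(ip_address):
    # Hash the client IP deterministically into the 0-999 routing range.
    digest = hashlib.sha256(ip_address.encode("utf-8")).hexdigest()
    return int(digest, 16) % 1000

def version_for_bucket(bucket, split):
    # `split` is an ordered list of (version, percent) pairs summing to 100.
    threshold = 0
    for version, percent in split:
        threshold += percent * 10  # scale percent onto the 0-999 range
        if bucket < threshold:
            return version
    return split[-1][0]

# 5% of buckets (0-49) route to version B; the rest route to A.
split = [("B", 5), ("A", 95)]
```

Because the hash is deterministic, a client keeps hitting the same version as long as its IP address does not change, which is exactly why shifting IP addresses break stickiness.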
Cookie Splitting
The application looks in the HTTP request header for a cookie named `GOOGAPPUID` that contains a value between 0–999. If the cookie exists, the number is used to route the request. If there is no such cookie, the request is routed randomly, and when the response is sent the app adds a `GOOGAPPUID` cookie with a random value between 0–999. (The cookie is added only when cookie-based traffic splitting is enabled and the response does not already contain a `GOOGAPPUID` cookie.)
Splitting traffic using cookies makes it easier to accurately assign users to versions, which allows more precision in traffic routing (as small as 0.1%). Cookie splitting also has some limitations:
If you are writing a mobile app or running a desktop client, it needs to manage the `GOOGAPPUID` cookie itself. When your app server sends back a response with a `Set-Cookie` header, you must store the cookie and include it with each subsequent request. (Browser-based apps already manage cookies in this way automatically.)
Splitting internal requests requires extra work. Requests sent server-side from your app to another app (or to itself) can be routed to a split version, but doing so requires forwarding the user's `GOOGAPPUID` cookie with the request. Forwarding the cookie on internal requests that don't originate from a user is not recommended.
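For a non-browser client, the cookie management described above amounts to persisting `GOOGAPPUID` from the `Set-Cookie` response header and replaying it on every request. A minimal sketch using only the Python standard library (helper names are illustrative):

```python
from http.cookies import SimpleCookie

def extract_split_cookie(set_cookie_header):
    # Pull the GOOGAPPUID value out of a Set-Cookie response header, if present.
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    morsel = jar.get("GOOGAPPUID")
    return morsel.value if morsel else None

def cookie_header(googappuid):
    # The Cookie header the client should attach to every subsequent request.
    return "GOOGAPPUID=%s" % googappuid

uid = extract_split_cookie("GOOGAPPUID=417; Path=/")
```

A client that stores `uid` durably (e.g. on disk) keeps being routed to the same version across sessions, which is what makes cookie splitting more precise than IP splitting.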
Caching and traffic splitting
Caching issues can exist for any App Engine application, especially when deploying a new version. Traffic splitting often makes subtle caching problems more apparent.
For example, assume you are splitting traffic between two versions, A and B, and some external cacheable resource (like a css file) changed between versions. Now assume the client makes a request, and the response contains an external reference to the cached file. The local HTTP cache will retrieve the file if it's in the cache - no matter which version of the file is cached and which version of the application served up the response. The cached resource could be incompatible with the data in the response.
Avoid caching problems for dynamic resources by setting the `Cache-Control` and `Expires` headers. These headers tell proxies that the resource is dynamic. It is best to set both headers, since not all proxy servers support the HTTP/1.1 `Cache-Control` header properly.
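A minimal sketch of setting both headers on a dynamic response (the helper is illustrative; the header values follow standard HTTP caching conventions, covering both HTTP/1.1 proxies via `Cache-Control` and older proxies via `Expires`):

```python
import email.utils
import time

def no_cache_headers():
    # Mark a dynamic response as non-cacheable: Cache-Control for HTTP/1.1
    # proxies, plus an already-expired Expires date for older HTTP/1.0 proxies.
    return {
        "Cache-Control": "no-cache, no-store, must-revalidate",
        "Expires": email.utils.formatdate(time.time() - 86400, usegmt=True),
    }
```

Attach these headers to every dynamic response that must not be served stale from a proxy or browser cache.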
For cacheable static resources that vary between versions, you can change the resource's URL between versions. If each version of a static resource is serving from a different URL, the versions can happily co-exist in proxy servers and browser caches.
If the app sets the `Vary: Cookie` header, the uniqueness of a resource is computed by combining the cookies and the URL for the request. This approach increases the burden on cache servers. There are 1,000 possible values of `GOOGAPPUID`, and hence 1,000 possible cache entries for each URL of your app. Depending on the load on the proxies between your users and your app, this may decrease the cache hit rate. Also note that for the 24 hours after adding a new batch of users to a version, those users may still see cached resources. However, using `Vary: Cookie` can make it easier to rename static resources that change between versions.
The `Vary: Cookie` technique doesn't work in all circumstances. In general, if your app is using cookies for other purposes, you must consider how this affects the burden on proxy servers. If your app had its own cookie with 100 possible values, the space of all possible cache entries would become a very big number (100 * 1000 = 100,000). In the worst case, there is a unique cookie for every user. Two common examples of this are Google Analytics (`__utma`) and SiteCatalyst (`s_vi`). In these cases, every user gets a unique copy, which severely degrades cache performance and may also increase the billable instance hours consumed by your app.
Traffic migration
When 100% of traffic is routed to one version, you can use traffic migration to re-route requests to some other version. Select an available version in the versions list and press Migrate Traffic. Migration takes a short amount of time (possibly a few minutes); the exact interval depends on how much traffic your app is receiving and how many instances are running. Once the migration is complete, the new version receives 100% of the traffic.
Note: Traffic migration is only available between versions running in the sandbox environment. You cannot migrate to or from a version running in the flexible environment.
Warmup requests
When using traffic migration you must enable warmup requests on the target version (the version you are migrating to).
When a request from a user requires the creation of a new instance, the instance receives a loading request first, in order to initialize and load the application code. This can increase latency when handling the first user request. Warmup requests are sent to new instances before they receive user requests, which improves response time.
Warmup requests are already enabled by default in Java modules. For Python, Go, and PHP modules, you need to enable warmup requests by including this line in your `app.yaml` file:
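The referenced line is App Engine's standard inbound services directive; a minimal `app.yaml` excerpt:

```yaml
inbound_services:
- warmup
```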
For more information, read about warmup requests.
Viewing logs
View the logs for all of a project's services, including App Engine. See How to read a log for an explanation of the log fields.
Using security scans
Identify security vulnerabilities in your Google App Engine web applications.
The Google Cloud Security Scanner crawls your application, following all links within the scope of your starting URLs, and attempts to exercise as many user inputs and event handlers as possible to discover vulnerabilities.
In order to use the security scanner, you must be an owner of the project. For more information, read the security scanner instructions.
Managing application resources
Viewing quota details
See the daily usage and quota consumption for your project in the Cloud Platform Console.
Resources are grouped by API. If some project resource exceeds 50% of its quota halfway through the day, it may exceed the quota before the day is over. For more information about quotas, see the App Engine quota page and Why is My App Over Quota?
You can also use the billing export feature to write daily Google Cloud Platform usage and cost estimates to a CSV or JSON file that's stored in a Google Cloud Storage bucket you specify.
Managing task queues
Use task queues so that a module that handles requests can pass off work to a background task that executes asynchronously.
In the Cloud Platform Console, you can see scheduled tasks, delete tasks, and manage a queue. This page displays information about both types of task queues, pull queues and push queues, and also tasks that belong to cron jobs. Select the type of task using the menu bar at the top of the page. When you are viewing push and pull queue reports, a quotas link appears at the upper right that hides and shows your app's use of task queue quotas.
A table lists all of your application's queues, filtered by the type you selected. Click on a queue name to show the tasks scheduled to run that queue. Click on a task name to see detailed information about the task's payload, headers, and previous run (if any).
You can manually delete individual tasks or purge every task from a queue. This is useful if a task cannot be completed successfully and is stuck waiting to be retried. You can also pause and resume a queue.
Managing memcache
Use a distributed in-memory data cache to improve performance by storing data, such as the results of common datastore queries. Memcache contains key/value pairs, and the actual pairs in memory at any time change as items are written and retrieved from the cache.
In the Cloud Platform Console, view the top keys, manage keys, and monitor memcache. The top of the page displays key information about the state of your project's memcache:
Memcache service level (shared or dedicated memcache). The shared level is free, and is provided on a "best effort" basis with no space guarantees. The dedicated class assigns resources exclusively to your application and is billed by the gigabyte-hour of cache space (it requires billing to be enabled). If you are an owner of the project, you can switch between the two service levels.
Hit ratio (as a percentage) and the raw number of memcache hits and misses.
Number of items in the cache.
The age of the oldest cached item. Note that the age of an item is reset every time it is used, either read or written.
Total cache size.
Top keys by MCU (for dedicated memcache). See Viewing top keys for more information.
Viewing top keys
For dedicated memcache, the memcache page displays a list of the 20 top keys by MCU over the past hour, which can be useful for identifying problematic "hot keys". The list is created by sampling API calls; only the most frequently accessed keys are tracked. Although the viewer displays 20 keys, more may have been tracked. The list gives each key's relative operation count as a percentage of all memcache traffic to the shard that stores that key (note that each key is stored in at most one shard). If an application is a heavy user of memcache and some keys are used very often, the display may include warning indicators.
For tips on distributing load across the keyspace, refer to the article Best Practices for App Engine Memcache.
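One common load-distribution technique from that article is key sharding: splitting a single hot key across several shard keys so that no one memcache shard takes all the traffic. A sketch (the shard count is an assumption you tune per key):

```python
import random

NUM_SHARDS = 10  # assumption: tune per hot key based on its traffic

def shard_keys(key):
    # All shard keys a reader must fetch and combine (e.g. sum, for counters).
    return ["%s:%d" % (key, i) for i in range(NUM_SHARDS)]

def write_key(key):
    # Writers pick one shard at random, spreading writes across shards
    # (and therefore across the memcache servers that store them).
    return "%s:%d" % (key, random.randrange(NUM_SHARDS))
```

Reads become slightly more expensive (a batch get over all shards) in exchange for removing the single hot key that would otherwise dominate one shard's MCU.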
Managing keys
You can add a new key to the cache and retrieve an existing key.
Monitoring memcache
Dedicated memcache is rated in operations per second per GB, where an operation is defined as an individual cache item access, e.g. a `get`, `set`, or `delete`. The operation rate varies by item size approximately according to the following table. Exceeding these ratings may result in increased API latency or errors.
The following table provides the maximum number of sustained, exclusive `get-hit` or `set` operations per GB of cache. (Note that a `get-hit` operation is a `get` call that finds a value stored under the specified key and returns that value.)
(The per-item-size ratings table is not recoverable from this copy; as a reference point, the example below implies that 1 KB items sustain roughly 10,000 operations per second per GB.)
An app configured for multiple GB of cache can in theory achieve an aggregate operation rate computed as the number of GB multiplied by the per-GB rate. For example, an app configured for 5GB of cache could reach 50,000 memcache operations/sec on 1KB items. Achieving this level requires a good distribution of load across the memcache keyspace, as described in Best Practices for App Engine Memcache.
For each IO pattern, the limits listed above apply to reads or writes alone. For simultaneous reads and writes, the limits are on a sliding scale: the more reads being performed, the fewer writes can be performed, and vice versa. The following are example IOPS limits for simultaneous reads and writes of 1KB values per 1GB of cache:
Read IOPs | Write IOPs |
---|---|
10000 | 0 |
8000 | 1000 |
5000 | 2500 |
1000 | 4500 |
0 | 5000 |
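Between the tabulated points the documentation only says the limits slide; assuming linear interpolation between adjacent rows (an assumption, since the actual curve is not specified), the trade-off can be sketched as:

```python
# (read IOPS, max write IOPS) trade-off points per GB for 1KB values,
# taken directly from the table above.
TRADEOFF = [(0, 5000), (1000, 4500), (5000, 2500), (8000, 1000), (10000, 0)]

def max_write_iops(read_iops):
    # Linearly interpolate between adjacent documented points.
    for (r0, w0), (r1, w1) in zip(TRADEOFF, TRADEOFF[1:]):
        if r0 <= read_iops <= r1:
            frac = (read_iops - r0) / float(r1 - r0)
            return w0 + frac * (w1 - w0)
    raise ValueError("read IOPS outside the documented 0-10000 range")
```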
Memcache compute units (MCU)
Note: The way that Memcache Compute Units (MCU) are computed is subject to change.
Memcache throughput can vary depending on the item size and operation. To help developers roughly associate a cost with operations and estimate the traffic capacity that they can expect from dedicated memcache, we define a unit called Memcache Compute Unit (MCU). Developers can expect 10,000 MCU per second per GB of dedicated memcache. The Google Cloud Platform Console shows how much MCU your app is currently using.
Note that MCU is a rough statistical estimate, and it is not a linear unit. Each cache operation that reads or writes a value has a corresponding MCU cost that depends on the size of the value. The MCU cost of a `set` depends on the value size: it is 2 times the cost of a successful `get-hit` operation.
Value item size (KB) | MCU cost for `get-hit` | MCU cost for `set` |
---|---|---|
≤1 | 1.0 | 2.0 |
2 | 1.3 | 2.6 |
10 | 1.7 | 3.4 |
100 | 5.0 | 10.0 |
512 | 20.0 | 40.0 |
1024 | 50.0 | 100.0 |
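The table can be applied programmatically; this sketch rounds a value up to the nearest documented size tier and applies the "set costs twice a get-hit" rule (the tier-rounding behavior is an assumption about how sizes between rows are treated):

```python
# get-hit MCU cost by value size tier (KB), from the table above.
GET_HIT_MCU = {1: 1.0, 2: 1.3, 10: 1.7, 100: 5.0, 512: 20.0, 1024: 50.0}

def mcu_cost(value_size_kb, operation):
    # Find the smallest documented tier that holds the value.
    for tier in sorted(GET_HIT_MCU):
        if value_size_kb <= tier:
            base = GET_HIT_MCU[tier]
            break
    else:
        raise ValueError("value exceeds the 1 MB memcache item limit")
    # A set costs twice a successful get-hit of the same size.
    return base * 2 if operation == "set" else base
```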
Operations that do not read or write a value have a fixed MCU cost:
Operation | MCU |
---|---|
`get-miss` | 1.0 |
`delete` | 2.0 |
`increment` | 2.0 |
`flush` | 100.0 |
`stats` | 100.0 |
Note that a `get-miss` operation is a `get` that finds no value stored under the specified key.
Managing datastore
Use a schemaless, scalable datastore for structured data objects. Manage your application's datastore in the Cloud Platform Console.
Datastore dashboard
View data for the entities in your application's Datastore, as well as statistics for the built-in and composite indexes in the Cloud Datastore Dashboard. The Statistics page displays data in various ways:
A pie chart that shows datastore space used by each property type (string, double, blob, etc.).
A pie chart showing datastore space by entity kind.
A table with the total space used by each property type. The "Metadata" property type represents overhead space consumed in storing properties that is not used by the property values themselves. The "Datastore Stats" entity, if present, shows the space consumed by the statistics data itself in your datastore.
A table showing the total size, average size, and entry count for all entities and for the built-in and composite indexes.
By default, the pie charts display statistics for all entities. To restrict the pie charts to a particular entity kind, use the drop-down menu.
The statistics data is stored in your app's datastore. To make sure there's room for your app's data, Google App Engine only stores statistics if they consume less than 10 megabytes when namespaces are not used, or 20 megabytes when namespaces are used. If your app's stats exceed the limit, kind-specific stats are not emitted. If the remaining "non-kind" stats subsequently exceed the limits, then no stats are updated until stats storage drops below the size limitation. (Any previously reported statistics remain.) You can determine whether this is happening by looking at the timestamps on the stat records: if a timestamp is more than 2 days old while stats in other applications are being updated regularly, you have likely run into the stat storage size limit for your app.
The space consumed by the statistics data increases in proportion to the number of different entity kinds and property types used by your app. The more different entities and properties used by your app, the more likely you are to reach the stat storage limit. Also, if you use namespaces, remember that each namespace contains a complete copy of the stats for that namespace.
Note: Datastore operations generated by Datastore Statistics count against your application quota.
Indexes
In the Cloud Platform Console, you can create a new entity and view a table of all indexes.
Query
In the Cloud Platform Console, select an entity kind and construct a query by applying filters. You can also create a new entity on this page.
Backup/restore, copy, delete
To perform bulk operations on the entities in your datastore, go to the Datastore Admin page.
For more information, see Managing Datastore from the Google Cloud Platform Console.
Search
App Engine provides a Search service that stores structured documents in one or more indexes. A document contains data-typed fields.
If your app uses the Search service, you can search an index for documents with fields that match queries. You can also view all the indexes in a project and the documents contained in each index.