Does anyone know how to use Gevent, Gunicorn, and Nginx in a Flask project? I'm trying, but the asynchronous functions don't work!
A ContextMenu is meant to close automatically when you select an option. If you want to keep it open after an item is clicked, you'll need a custom approach, such as a custom dialog or a BottomSheet.
Today, you can find plenty of working MatDialog examples here:
https://material.angular.io/components/dialog/examples
Note that you have several ways to pass values to the dialog:
- MAT_DIALOG_DATA
- this.dialogRef.componentRef.setInput(key, value)
You can remove the post and comment box in WordPress through several methods. For individual pages or posts, disable the comments in the Discussion settings while editing. If you want to apply this site-wide, go to Settings > Discussion and uncheck the option to allow comments on new articles. For a more technical approach, you can add filters to the functions.php file to disable comments entirely, like so:
add_filter('comments_open', '__return_false', 10, 2);
add_filter('comments_array', '__return_empty_array', 10, 2);
If you're managing events or other interactive content and need a streamlined solution, consider using a plugin like Simple WP Events, which offers easy-to-use options without unnecessary clutter.
I solved the problem, but not the way I wanted. I was directly finding the definition ID and assigning it to the foreign key value, but the value is reset when I call SubmitChanges(); that still does not work. However, when I directly assign the foreign table record, it works.

Not working:
element.FKProgramCiktiID = ....

Working:
element.FKProgramCikti = ....
Setting shrinkResources to false solved the issue.
I've tried many different libraries but haven't found a perfect one.
react-native-math-view is abandoned, so you shouldn't go there.
In JavaScript, class declarations are a modern (ES2015) construct that engines tend to optimize well. Object declarations, on the other hand, are an older and more general construct; under the hood, both build on the same prototype mechanism.
After you download your Laravel application, run the following command to make Sail available within your project, as shown in the official Laravel documentation:

docker run --rm \
    -u "$(id -u):$(id -g)" \
    -v "$(pwd):/var/www/html" \
    -w /var/www/html \
    laravelsail/php84-composer:latest \
    composer install --ignore-platform-reqs
I found a simple way to solve this; this code will make your life easier. Add it at the beginning of your code:

import os
import certifi

os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()
os.environ["SSL_CERT_FILE"] = certifi.where()
The issue posed is totally unrelated to the reason behind this behavior.

In the setup, we perform matrix multiplication on single-precision numbers on both the GPU and the CPU, and the test compares the CPU and GPU outputs. While this seems reasonable, the two are different machines and hence have different computational accuracy because of floating-point limitations. The difference scales with the input matrix size because the floating-point error difference between the machines accumulates.

Another thing that can be explained now is why reducing the block size led to a test case passing: with a reduced block size, the errors produced are within the accepted bounds. But this still does not work for bigger inputs, because the computation by a single thread requires it to scan an entire row and column, each of which scales with the input size.

MAX_VALUE: this behavior is still not obvious. No computation goes beyond the floating-point limit, so I am not sure how this can be explained.
For anyone coming here in 2025, here is what worked for me on Windows on 12/31/2024:

import pandas as pd
vgdata = pd.read_csv(r"C:\Users\Ahmad\Documents\vgsale\vgsales.csv")

Note the raw string (r"...") so the backslashes in the Windows path are not treated as escape sequences.
For me, the wallet connection is lost in the Trust Wallet mobile app, but it works in the desktop browser extension. In MetaMask it is okay in both the extension and the mobile app. I am using Web3Modal for the connection.
If everything overlaps and doesn't fit when the keyboard is open in landscape mode, you could try encapsulating all of your views in a ScrollView. This way all of your EditTexts will stay where they were originally designed to be in relation to each other.
You could then use the ScrollView.scrollTo() or ScrollView.scrollBy() methods to automatically scroll the page down to the relevant EditText (or let the user scroll themselves).
Using Meteor version 3.0.4 worked:
meteor create myapp --vue --release 3.0.4
Also using the latest beta:
meteor create myapp --vue --release 3.1.1-beta.2
I think Codemap (https://codemap.app) is a great tool for visualizing the function/class/file level call graph of a PHP codebase. It currently only works on Linux and Mac, but not Windows.
(disclaimer: I created Codemap, which supports many programming languages, PHP included)
The latest release of the ibm_db package has fixed this issue, and you can install it simply with the npm install ibm_db command. ibm_db now works with the arm64 version of Node.js on macOS M1 systems. Thanks.
If you're using React Native with Expo, you can run the command
npx expo install the-expo-package
This will fix almost every issue with the package version, because Expo automatically determines the suitable version.
If the date is in cell B2, try:
=IF(DAY(B2)>10, MONTH(B2), MONTH(B2)-1)
This returns an empty string instead of the default message:
add_filter( 'woocommerce_checkout_login_message', 'remove_guest_checkout_message' );
function remove_guest_checkout_message() {
return '';
}
When working with updateinvoicelines(), ensure you minimize the duration between loading and saving the record. Fetch the record, update only the necessary fields, and save it immediately. Add error-handling logic to retry saving the record if it fails due to a "Record has been changed" error. This can be done with a try-catch block and a retry mechanism.
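The retry pattern described above can be sketched as follows. This is an illustrative Python version of the try/catch-plus-retry idea, not SuiteScript; save_with_retry and RecordChangedError are hypothetical stand-ins:

```python
import time

class RecordChangedError(Exception):
    """Hypothetical stand-in for the 'Record has been changed' error."""

def save_with_retry(save_record, max_attempts=3, delay_seconds=1):
    """Call save_record, retrying if the record changed underneath us."""
    for attempt in range(1, max_attempts + 1):
        try:
            return save_record()
        except RecordChangedError:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            time.sleep(delay_seconds)  # pause, then re-fetch and retry

# Example: a save that fails twice, then succeeds on the third attempt.
attempts = []
def flaky_save():
    attempts.append(1)
    if len(attempts) < 3:
        raise RecordChangedError()
    return "saved"

print(save_with_retry(flaky_save, max_attempts=5, delay_seconds=0))  # saved
```

In a real script you would re-load the record inside the retry loop, so each attempt works from the freshest copy.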
On the first invoke, when Lambda pulls a new image or creates a container, if the result of that first invoke is an error (e.g. if I intentionally send bad request params), the Slack transport sends the error only when that container is killed (approx. 5 minutes after there are no new requests). If I try new requests, the Slack transport sends them to Slack.

This is probably related to the async error handling of your Slack transport. If the async execution is not handled properly, the lambda will freeze before the error is sent to the webhook. The reason why it's sent later on is the container lifecycle:

- After an invocation, Lambda freezes the container (SIGSTOP signal)
- On shutdown, Lambda first thaws the process (SIGCONT, because you can't stop a frozen process) - that would explain the execution you had after 5 minutes
- Lambda then asks the process to exit (SIGTERM)
- If SIGTERM is failing, it will force a stop with a SIGKILL signal

The winston CloudWatch transport will not log any error to CloudWatch during the container lifetime, but will send them all after the container is killed, when Lambda decides to do that.
The issue seems to be similar to the previous one. In that case the project is using callback-based functions, and I guess your code is using promises with async/await. You should be able to make it work correctly by using the promisify util, as in this example:
const winstonCloudWatch = new WinstonCloudWatch({
name: 'using-kthxbye',
logGroupName: 'testing',
logStreamName: 'another',
awsRegion: 'us-east-1',
awsAccessKeyId: await getSecretValue('AWS_ACCESS_KEY_ID'),
awsSecretKey: await getSecretValue('AWS_SECRET_KEY'),
});
const kthxbyeAsync = promisify(winstonCloudWatch.kthxbye).bind(winstonCloudWatch);
winston.add(winstonCloudWatch);
winston.add(new winston.transports.Console({
level: 'info'
}));
To download and install CCleaner using PowerShell, follow these steps:

Step 1: Open PowerShell as Administrator. Press Win + S, type PowerShell, right-click Windows PowerShell, and select Run as Administrator.

Step 2.1: Download the CCleaner installer directly from its official website:

Invoke-WebRequest -Uri "https://download.ccleaner.com/ccsetup604.exe" -OutFile "$env:TEMP\ccsetup604.exe"

This downloads the installer file to your temporary folder.

Step 2.2: Install CCleaner silently, without user interaction:

Start-Process -FilePath "$env:TEMP\ccsetup604.exe" -ArgumentList "/S" -Wait

/S ensures a silent installation (no prompts); -Wait waits for the installation to complete.

Step 3: Verify the installation. Check if CCleaner is installed by searching in the Start menu, or check its folder: C:\Program Files\CCleaner.

Optional: after installation, you can delete the installer file:

Remove-Item "$env:TEMP\ccsetup604.exe"

This ensures a quick and automated installation of CCleaner using PowerShell.
Open the IntelliJ settings and select Editor | Code Style | Java | Code Generation.
Select the Use external annotations checkbox, and the annotation option will be enabled.
IntelliJ version used: 2024.2.3
If you want to use a seccomp profile in your pod without configuring the profile JSON in /var/lib/kubelet/seccomp/profiles, you may use the RuntimeDefault seccomp profile.

As per this official Kubernetes document:

Most container runtimes provide a sane set of default syscalls that are allowed or not. You can adopt these defaults for your workload by setting the seccomp type in the security context of a pod or container to RuntimeDefault.

Note: If you have the seccompDefault configuration enabled, then Pods use the RuntimeDefault seccomp profile whenever no other seccomp profile is specified. Otherwise, the default is Unconfined.

Here's a manifest for a Pod that requests the RuntimeDefault seccomp profile for all its containers:

apiVersion: v1
kind: Pod
metadata:
  name: default-pod
  labels:
    app: default-pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: test-container
    image: hashicorp/http-echo:1.0
    args:
    - "-text=just made some more syscalls!"
    securityContext:
      allowPrivilegeEscalation: false
Refer to this document and also check this blog for more information which might be helpful for you.
What do you mean by "3. Run the local http server from the folder dist, open the web application and browse it manually"? Can you please elaborate on this?
You can also try https://github.com/macropay-solutions/laravel-crud-wizard-free. It comes with a setup for each CRUD action, plus updatable columns, filters, and many more features.
If audio-visualizer is not found, use:
implementation "github.gauravk95:audio-visualizer-android:master"
This dependency offers many audio visualizations, such as circle, bar, and line. For more details and an example, see the link.
Please provide more details or the error messages that you are getting from npm run build.
Run the following command to check the path to your Python 3 installation:
$ which python3
This will show the full path of your Python 3 binary. On most systems it will be /usr/bin/python3 or /opt/homebrew/bin/python3.
Check your shell config with cat ~/.zshrc and add this line if it is not already there: alias python=/correct-path-to-python3
For example: alias python=/usr/bin/python3
Then run source ~/.zshrc or restart the terminal, and verify with:
python --version
The error occurs because you are parsing the key choices, which doesn't exist. Check this line:
String reply = jsonResponse.getJSONArray("choices");
According to this link, proxy_pass is not allowed in the server block (allowed contexts: location, if in location, limit_except), so you must add it via configuration-snippet, and you must make sure the nginx configuration validates.
I fixed these unresolved external symbols by removing the /ENTRY:DllMain argument from the Makefile.
Did you manage to fix it? I also have the same issue.
Try running these commands:
flutter pub get
flutter pub run build_runner build --delete-conflicting-outputs
I added the annotations below to my GlobalExceptionHandler class, ran mvn clean install, and started the application; after that I was able to view http://localhost:----/swagger-ui/index.html#/
@Hidden
@ControllerAdvice
I face the same issue. May I know how to create and grant an application access policy?
The problem was caused by an expired JWT token, which happened because of wrong system time. Once that was fixed, the token no longer expired and the problem was solved.
The error is due to PySpark not being able to deal with column names containing special characters. So either replace the special characters in the column names, or quote each name in backticks, e.g. col(f"`{column_name}`").
https://www.mungingdata.com/pyspark/avoid-dots-periods-column-names/
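If you prefer to rename the columns instead of quoting them, here is a minimal sketch of stripping the special characters first (plain Python string handling; the column names are made up for illustration):

```python
import re

def sanitize(name: str) -> str:
    """Replace characters that trip up PySpark (dots, spaces, parens, ...) with underscores."""
    return re.sub(r"[^0-9a-zA-Z_]", "_", name)

columns = ["user.id", "first name", "amount(usd)"]
clean = [sanitize(c) for c in columns]
print(clean)  # ['user_id', 'first_name', 'amount_usd_']
```

In PySpark you could then apply the renaming with df.toDF(*clean), or keep the original names and wrap them in backticks when selecting.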
Your question is not clear, but if you are asking whether we can implement a linked list in Laravel: yes, we can.
As a developer, though, if you ask me, I would prefer arrays over linked lists.
All who tried to show off their coding skills should put their brains in gear. With ping google.com or ping 8.8.8.8, the output is fast and continuous and does not allow any command to be typed to stop it. The remedy is to manually reboot the device.
As for me, I'd like to know how to remove the left and right paddings. I tried all the solutions suggested here, but none of them worked.
I have checked the ARM64 VM architectures, and it seems no size supports Nested Virtualization. These sizes have the suffix pds, pls, plds, etc.:
D2pds_v5 | Ampere Altra [Arm64] | Nested Virtualization: Not supported
D2pds_v6 | Azure Cobalt 100 [Arm64] | Nested Virtualization: Not supported
D2pls_v5 | Ampere Altra [Arm64] | Nested Virtualization: Not supported
D2pls_v6 | Azure Cobalt 100 [Arm64] | Nested Virtualization: Not supported
D2ps_v5 | Ampere Altra [Arm64] | Nested Virtualization: Not supported
D2ps_v6 | Azure Cobalt 100 [Arm64] | Nested Virtualization: Not supported
D2plds_v5 | Ampere Altra [Arm64] | Nested Virtualization: Not supported
D2plds_v6 | Azure Cobalt 100 [Arm64] | Nested Virtualization: Not supported
You can check the details from these links.
Furthermore, someone else also confirmed it (links).
Update package.json -> devDependencies -> @tauri-apps/cli to version ^2.1.0
Hyparquet is a tiny and well-supported parquet parser for the browser. It is written in pure JavaScript, so it works well with webpack. I confirmed this works with the webpack 5 default config:
hyparquet-webpack-demo.js
import { asyncBufferFromUrl, parquetRead } from 'hyparquet'
// Load parquet data from a url using hyparquet
const url = 'https://hyperparam-public.s3.amazonaws.com/bunnies.parquet'
async function main() {
const file = await asyncBufferFromUrl({ url })
await parquetRead({
file,
onComplete: (data) => console.log(data),
rowFormat: 'object',
})
}
main()
webpack.config.js
module.exports = {
mode: 'development',
entry: './hyparquet-webpack-demo.js',
}
// output: ./dist/main.js
Using act, while not always necessary, is good practice when updating state or testing rendered components, since it "makes your test run closer to how React works in the browser" (docs) by making sure that rendering happens before any assertions.
I am running Python through Spyder and Anaconda and am getting a similar issue. Could anyone please tell me how you sorted it out? Thanks.
This problem is related to a bug in the rc-table library that Ant Design uses. The problem is solved in rc-table version 7.50.1.
In the package.json of your project, add this property and run npm install again:
"overrides": {
  "antd": {
    "rc-table": "7.50.1"
  }
}
The issue lies in the LANGUAGE SQL declaration in your function definition. It works fine like this - please check:
CREATE OR REPLACE FUNCTION add(integer, integer)
RETURNS integer
LANGUAGE sql
AS 'select $1 + $2;'
RETURNS NULL ON NULL INPUT
COST 100;
In the latest Ionic 8, the following CSS can be used to change the background color of a disabled Fab button:
ion-fab-button.fab-button-disabled::part(native) {
  background: yellow;
}
In the name of God.
Hello, I think if you use overflow: hidden; in the parent div, it will become okay.
You can check this answer; you need to make changes to the Gradle file:
https://stackoverflow.com/a/78703060/4373661
Hope this helps.
The issue here is that this is a Firebase Google Services dependency, so we need to add the Google Services dependency first:
implementation("com.google.android.gms:play-services-vision:20.0.0")
The only way I can figure is to write a .m file that builds all the comments. Something like:
generator.m (origin) --> pub.m (file to publish) --> html (published web page)
My code example, myOwnSolution.m, is:
% My own solution
clc, clear, close all
syms s
fprintf('I want to Publish a LaTeX Variable, randomly built.\n');
fprintf('Let''s say:\n');
a = randi([2,7]);
b = randi([2,7]);
P = [1,a,b]; % Coeffs are randomly selected
symPol = poly2sym(P,s);
disp('A pretty view of P(s), in the Workspace:')
pretty(symPol);
fprintf('\n\n')
fileID = fopen('testPublishLaTeX.m','w');
fprintf(fileID,'%%%% Publish a LaTeX variable\n');
fprintf(fileID,'%% This is a Publish file in Matlab\n');
fprintf(fileID,'%%%% First Section \n');
fprintf(fileID,'%% Publish a LaTeX variable in a text line: $P(s)=%s$\n',latex(symPol));
fprintf(fileID,'%%%% Second Section\n');
fprintf(fileID,'%% Publish a LaTeX variable and expression in a ordered list:\n');
fprintf(fileID,'%%\n');
fprintf(fileID,'%% # A LaTeX variable: $P(s)=%s$\n',latex(symPol));
fprintf(fileID,'%% # A LaTeX expression: $F(s)=\\alpha^2$\n');
fprintf(fileID,'%%%% Third Section\n');
fprintf(fileID,'%% Publish a LaTeX variable and expression in a unordered list:\n');
fprintf(fileID,'%%\n');
fprintf(fileID,'%% * A LaTeX variable: $P(s)=%s$\n',latex(symPol));
fprintf(fileID,'%% * A LaTeX expression: $F(s)=\\alpha^2$\n');
fprintf(fileID,'%%%% Fourth Section\n');
fprintf(fileID,'%% Publish a LaTeX variable and expression as lone expressions:\n');
fprintf(fileID,'%%\n');
fprintf(fileID,'%% $$P(s)=%s$$\n',latex(symPol));
fprintf(fileID,'%%\n');
fprintf(fileID,'%% $$F(s)=\\alpha^2$$\n');
fprintf(fileID,'%%%% Fifth Section\n');
fprintf(fileID,'%% Publish a LaTeX variable with (not in) HTML format:\n');
fprintf(fileID,'%%\n');
fprintf(fileID,'%% <html>\n');
fprintf(fileID,'%% <div style="font-family:Georgia, serif; font-size:large;">\n');
fprintf(fileID,'%% <p>This is my polynomial over s:</p>\n');
fprintf(fileID,'%% </div>\n');
fprintf(fileID,'%% </html>\n');
fprintf(fileID,'%%\n');
fprintf(fileID,'%% $$P(s)=%s$$\n',latex(symPol));
fprintf(fileID,'%%\n');
fclose(fileID);
myDoc = publish("testPublishLaTeX.m","html");
Now I can view in the browser the file testPublishLaTeX.html, just created with the publish command.
JPQL does not support the SELECT * syntax or raw table names. It operates on entity classes and their fields, not on raw database tables.
Here is how you can fix it:
private static final String FIND_USER_BY_NAME = "SELECT u FROM User u WHERE u.userName = :userName";
Here is what I am getting out of this: the tower's HumanoidRootPart is probably still moving, but the rest of the tower might not be. If the tower is a Model, you can use :MoveTo() on it to move the whole thing. Example code:
mouse.Move:Connect(function()
print("moved")
tower:MoveTo(mouse.Hit)
print(towerHum.CFrame)
print(mouse.Hit)
end)
I am writing this on a school iPad, so I am not able to test this or provide documentation links (they blocked roblox.com), but please tell me if it works!
You can do this from the Databricks Account console only if you have the account admin role.
Go to https://accounts.azuredatabricks.net/ and log in.
Then go to the User management section in the left pane.
Search for the user, click on it, and change the name to your desired one.
I solved this problem after a lot of research. You need a proto file to implement text input, especially these parameters: imeObject, editInfo, imeBatchEdit.
This works in the forward direction but not in the backward direction. That is possible in Visual Studio.
Kind regards, Matthias Lakämper
Yes, it's possible to create a VPN app using Flutter and Dart, but it involves integrating native platform code, since Flutter itself does not provide a direct way to interact with low-level VPN APIs. Here's how you can approach it:

1. Understand VPN requirements: VPNs require access to device-level network configurations, which typically involves using native platform APIs (Android: VpnService; iOS: NEVPNManager).
2. Flutter and native code integration: use platform channels to communicate between Dart and native code (Kotlin/Java for Android, Swift/Objective-C for iOS). The VPN functionality (configuring servers, protocols, and connections) is implemented natively, while the Flutter UI handles user interaction.
3. Use third-party libraries: on Android, libraries like strongSwan or OpenVPN for Android; on iOS, Apple's Network Extension framework. Consider OpenVPN or WireGuard SDKs for both platforms.
4. Flutter plugins: create a custom plugin for your app or check existing ones, for example flutter_vpn, though it might require modifications or additional work for your use case.
5. Backend server for VPN: a VPN app typically requires a server-side component to manage VPN connections. Set up a VPN server using tools like OpenVPN, WireGuard, or Shadowsocks, and use APIs to interact with the server and manage user accounts, subscriptions, etc.
6. Implement features: an intuitive UI for connecting to the VPN and selecting servers; user authentication and subscriptions; support for popular protocols like OpenVPN, IKEv2/IPsec, or WireGuard; encryption and secure handling of user data.
7. Testing: test thoroughly on both Android and iOS devices for connection stability, speed, and security.
8. Compliance and permissions: obtain the necessary permissions for VPN access and comply with app store guidelines (e.g., App Store and Google Play policies for VPN apps).

Challenges you may face: working with platform-specific VPN APIs, setting up a secure and reliable VPN server, meeting app store requirements for VPN apps, and handling user privacy and data securely.
Presently I am using .NET 9 and it's not working. Even the sample app upgraded to .NET 9 is not showing any ads.
It's the method responsible for iterating over the services that are set up in your config files. Each of those properties, i.e. invokables, factories, etc., corresponds to a key in the configuration arrays. This is where they get consumed and attached to the service manager instance.
I used ^XA^CI28 and the Roboto font on my printer, but with the characters "y", "g", "ғ", "Â" the output is shifted up and down, not aligned.
From the error log, it might be caused by the class PrefetchingStatistics being missing.
You can refer to this answer, which addresses a similar problem.
In hindsight, this is a very silly question by me.
I looked back and realized the issue was in the :root styling within my index.css file. By default, a new Vite app will have a background-color set and removing only that line or changing it into 'transparent' will not solve the issue. The way I solved it was by entirely removing the styles for the :root class in index.css and any other files.
Thanks to Phil for his comment that made me go back and look at the test files again!
Field.MarshalJSON calls Encoder.Encode to marshal the Field value. Encoder.Encode calls Field.MarshalJSON to marshal the Field. This repeats until the stack space is exceeded.

Break the recursion by declaring a new type without the MarshalJSON method. Convert the Field value to that new type and encode that value:
type x Field
if err := encoder.Encode(x(f)); err != nil {
return nil, err
}
Security roles can be either additive or restrictive, depending on the specific needs of the system and the security model being implemented. However, the choice between these two approaches should be informed by the principles of least privilege and defense in depth. Here's a breakdown of the two approaches and when they might be used:

Additive approach
What it means: users start with no access or minimal access, and security roles explicitly grant additional permissions or functionality.
When to use: when you want to ensure strict control over what users can do, minimizing the risk of accidental over-privilege; when systems are designed with a deny-by-default philosophy, where permissions are only granted as needed; in environments where compliance or sensitive data requires the highest security (e.g., financial and healthcare systems).
Advantages: easier to align with the principle of least privilege, reducing potential attack surfaces; permissions are explicit and intentional, making them easier to audit and understand.
Disadvantages: can become cumbersome to manage if there are many roles and users with overlapping permissions.

Restrictive (negating) approach
What it means: users have a baseline set of permissions or functionality, and security roles restrict or negate specific access.
When to use: when most users require similar baseline functionality and only a subset needs restricted access (e.g., in public or shared systems); when legacy systems or broad access policies make it challenging to implement a purely additive model.
Advantages: simpler to manage in systems with broad default access requirements; easier to adapt to changes if the baseline access doesn't frequently change.
Disadvantages: can lead to over-provisioning of permissions if restrictions aren't carefully defined or enforced; harder to enforce least privilege, as the baseline may grant unnecessary permissions.

Combination approach
In many cases, a hybrid approach is used: baseline roles provide the minimal or broad permissions needed for general functionality; additive roles grant specific permissions for specialized tasks or features; restrictive roles can negate permissions in sensitive areas for specific groups or users.

Best practices
Default deny, explicit allow: the safest approach is to start with no permissions and explicitly grant access as needed.
Granularity: use fine-grained permissions to control access to specific features or data.
Auditing and monitoring: regularly review roles and permissions to ensure they align with business needs and security policies.
Role hierarchy: consider hierarchical roles where higher-level roles inherit permissions from lower-level ones for ease of management.

In general, additive roles align better with modern security best practices, as they provide greater control and reduce the risk of unintended permissions. However, restrictive roles can complement them in certain scenarios, especially in legacy systems or complex environments.
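The additive, deny-by-default model described above can be sketched as a union of per-role permission sets; the role and permission names here are invented for illustration:

```python
# Deny by default: a user holds only what their roles explicitly grant.
ROLE_PERMISSIONS = {
    "viewer": {"report.read"},
    "editor": {"report.read", "report.write"},
    "admin": {"report.read", "report.write", "user.manage"},
}

def effective_permissions(roles):
    """Union of everything the user's roles grant; no roles means no access."""
    granted = set()
    for role in roles:
        granted |= ROLE_PERMISSIONS.get(role, set())
    return granted

def can(roles, permission):
    return permission in effective_permissions(roles)

print(can(["viewer"], "report.write"))  # False
print(can(["viewer", "editor"], "report.write"))  # True
print(can([], "report.read"))  # False: default deny
```

A restrictive model would instead start from a broad baseline set and subtract permissions, which is why auditing it is harder: you must reason about everything the baseline includes.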
In the app folder, find sencha.cfg and change app.framework.version to the version you have (or the latest).
I have the same issue, and upgrading the Gradle version in the project works as a solution as well. But every time I create a new Flutter project I have to repeat this process; the default Gradle version is always set to 8.3, and I don't know how to change that.
Personal Statement
I am a Civil Engineering graduate with a strong academic background and practical experience in managing large-scale infrastructure projects. My undergraduate education provided me with a deep understanding of the technical and analytical aspects of civil engineering, but it also sparked a growing interest in the managerial side of the construction and engineering sectors. As I worked through internships and project management roles, I increasingly realized the importance of leadership, strategic decision-making, and business acumen in driving successful projects and organizations. This realization has motivated me to pursue an MBA at EMA Paris, where I believe I can develop the essential skills to become a well-rounded leader in the engineering and construction industry.
The decision to pursue an MBA is driven by my desire to bridge the gap between technical expertise and management proficiency. While engineering provides the foundation for problem-solving, an MBA will offer the strategic framework necessary to manage teams, streamline processes, and effectively navigate complex business environments. The dynamic nature of the global construction industry, combined with increasing demands for innovation and sustainability, has highlighted the need for professionals who not only understand the technical aspects of engineering but can also manage projects, lead organizations, and drive business growth.
EMA Paris stands out as the ideal institution for my MBA aspirations due to its reputation for fostering global perspectives, innovation, and a deep understanding of business practices. The school’s emphasis on practical, hands-on learning, combined with its diverse international cohort, offers an enriching environment where I can learn from both professors and peers. Additionally, EMA Paris’s strategic location in one of Europe’s business hubs provides invaluable networking opportunities with leading industry professionals, which I believe will be crucial for my career development.
Upon completing the MBA program at EMA Paris, I envision myself taking on a leadership role within the construction or infrastructure sector, where I can drive sustainable, innovative, and efficient solutions to meet the challenges of modern society. With an MBA, I aim to move beyond the technical constraints of engineering and take on responsibilities that involve managing large-scale projects, shaping corporate strategy, and leading multidisciplinary teams. Ultimately, I aspire to contribute to the growth of an organization while shaping the future of urban development and infrastructure globally.
I found that I was clicking the first button on the Touch Bar all the time to start debugging, but that button means "run without debugging". I should click the second button; that was a stupid mistake :(
Well, the way I am reading this code is slightly different than starriet suggests. It appears that

dotenv_path = os.path.join(os.path.dirname(__file__), ".env")
load_dotenv(dotenv_path)

reads the .env file and sets/clobbers key-value pairs into the environment variables of the process the Python script is ALREADY executing in. Then, when that process dies, it takes the loaded OS environment variables with it to the grave. So technically starriet is correct: the .env settings do not affect the shell you're actually in or other shells on the system, but they are nevertheless being set in the environment that the Python script is running in. That may or may not make a difference, depending on the Python script.
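The same behavior can be demonstrated with os.environ alone: a variable set inside the Python process is visible to that process and to any children it spawns, but never to the parent shell (DEMO_VAR is an arbitrary name chosen for this sketch):

```python
import os
import subprocess
import sys

# Like load_dotenv, this mutates the environment of the current process only.
os.environ["DEMO_VAR"] = "hello"

print(os.environ["DEMO_VAR"])  # hello

# Child processes inherit the modified environment...
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_VAR'])"],
    capture_output=True, text=True,
)
print(out.stdout.strip())  # hello

# ...but when this process exits, DEMO_VAR dies with it; the shell that
# launched the script never sees the variable.
```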
So the issue is that I was building the container on an M-series Mac, which runs on ARM and defaults to building containers optimized for ARM, whereas Google Cloud Run runs containers on linux/amd64.
You need to specify the platform you want to build for.
Build your container using this command:
docker buildx build -t flask_backend:latest --platform linux/amd64 .
Rather than using the .update method, try using .update_cell in order to prevent interpretation of the formula as a string:
def update_formulas(sheet, data):
for idx in range(2, len(data) + 2):
formula = f"=B{idx}*C{idx}"
sheet.update_cell(idx, 4, formula)
This can't be done via the CLI (at least not yet, AFAIK). So get the real MS Visual Studio IDE, the free Community VS2022, whose executable binary file is named devenv. Once launched, its wizard works like a charm: it will right away upgrade/update and migrate the old VS version project to the newest version. You can then compare the remaining .vcxproj and .vcproj files as they are.
Glad you found your error.
One method I always use is to set alpha to a very small value; for a tiny alpha, it should converge. If it doesn't, you have an error in your code.
Draw a graph: if the loss is not plateauing, check your code. Make sure you subtract alpha * gradient, not add it - a common mistake.
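A tiny sketch of that sanity check, minimizing f(x) = (x - 3)^2; the function, starting point, and learning rate are just illustrative choices:

```python
def gradient_descent(grad, x0, alpha=0.01, steps=2000):
    """Plain gradient descent: subtract (never add!) alpha times the gradient."""
    x = x0
    history = [x]
    for _ in range(steps):
        x = x - alpha * grad(x)  # flipping this to + makes it diverge
        history.append(x)
    return x, history

# f(x) = (x - 3)^2, so grad f(x) = 2 * (x - 3); the minimum is at x = 3.
x_min, history = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0

# The "draw a graph" check: the loss should fall and plateau, not grow.
losses = [(x - 3) ** 2 for x in history]
print(losses[0] > losses[-1])  # True
```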
I am new to C++, but upon testing, std::cout seems to print only 6 significant digits by default. To show a specific number of digits after the decimal point, setprecision does the trick. You have to include the iomanip header to use it.
#include <iostream>
#include <iomanip> // header required for setprecision
using namespace std;

int main() {
    double r = 50.5;
    double z = 2550.25128788;
    cout << fixed << setprecision(10) << r * z << "\n";
    return 0; // Output: 128787.6900379400
}
fixed forces fixed-point notation instead of scientific (exponential) notation.
setprecision(n) specifies n decimal places for the output when combined with fixed.
Check the urls.py and the request URL. It seems to return 403 on a wrong request URL.
Starting with MAMP Pro 7.1 mysqldump moved to this location:
/Applications/MAMP/Library/bin/mysql80/bin/mysqldump
Call it explicitly:
/Applications/MAMP/Library/bin/mysql80/bin/mysqldump --host=localhost -uroot -proot db_name > /path/to/db_name_backup.sql
Same here man...did it get fixed? If so, how did you fix it? Thanks for the help
const {email} = req.body // returns "[email protected]"
const user = await User.findOne({email})
// Try writing it like this.
const userEmail = req.body.email // the email string from the request body
const user = await User.findOne({ email: userEmail })
// considering there is an email field named email in the db
QSocketNotifier: Can only be used with threads started with QThread
Segmentation fault (core dumped)
I get this using Actiona on Ubuntu 22.04. This distro ships it as a Snap application.
I would call it a non-deterministic issue. I hate code execution mysteries. Probably some data driven edge case that your dependencies are hitting. Hard to repro, hard to find.
The above works perfectly when searching notes on a worksheet. If searching a range and you want to know whether there is no comment in a particular cell, try using NoteText instead:
Dim a As String
a = Worksheets("Sheet1").Range("A1").NoteText
I can't believe this answer is not on Stack Overflow. After months of trying and giving up, I finally saw someone on GitHub say to use sudo. I used sudo and it finally worked. I can't believe it was as simple as adding sudo to such a headache of a problem that I couldn't find the solution to.
sudo npx expo start --tunnel
I had this exact issue on my Linux PC with zsh. Adding the following to ~/.zshrc resolved the issue:
export SSH_AUTH_SOCK="$XDG_RUNTIME_DIR/ssh-agent.socket"
if ! pgrep -u "$USER" ssh-agent > /dev/null; then
eval "$(ssh-agent -s)"
fi
ssh-add -q ~/.ssh/id_personal
The root cause is that your lines $a and $b generate uncaptured output in the top-level script context, and so result in an implicit call to Out-Default to display the output on the console.
This passes the whole output from both lines into a single call to Format-Table, which has a quirk: it waits 300ms for more data to arrive before it decides which columns to display. It looks like in that 300ms only the data from $a is received, so it locks the columns down to Name and Group. When the output from $b is received, it doesn't automatically add the GroupMembership column.
@Santiago Squarzon's answer works around this by aligning the property names in $a and $b so the columns determined by Format-Table are consistent across all of the output.
Another option is to explicitly pipe the individual variables into Format-Table like this:
$a | format-table
...
$b | format-table
which will render two separate tables, each with its own columns calculated from the input to that separate call to format-table, and will result in this on the console:
Name Group
---- -----
D2\\[email protected] {ADMINS, WebService}
D2\\[email protected] WebService
D2\\[email protected] WebService
D2\\[email protected] ADMINS
D2\\[email protected] WebService
Name GroupMembership
---- ---------------
D2\\[email protected] {ADMINS, WebService}
D2\\[email protected] WebService
D2\\[email protected] WebService
D2\\[email protected] WebService
See these links for more gory technical details:
Same issue. Disabling works; thoughts?
# Import and Disable Default Repo
data "azuredevops_git_repository" "lab_001_default" {
  project_id = azuredevops_project.lab_001.id
  name       = azuredevops_project.lab_001.name
}

resource "azuredevops_git_repository" "lab_001_default" {
  project_id = azuredevops_project.lab_001.id
  name       = azuredevops_project.lab_001.name
  disabled   = true

  initialization {
    # I assume the default is Uninitialized, but this is ignore_changes so I
    # don't think we should care.
    init_type = "Uninitialized"
  }

  lifecycle {
    ignore_changes = [
      # Ignore changes to initialization to support importing existing repositories.
      # Given that a repo now exists, either imported into terraform state or created by terraform,
      # we don't care about the configuration of initialization against the existing resource.
      initialization,
    ]
  }
}

import {
  id = join("/", [
    data.azuredevops_git_repository.lab_001_default.project_id,
    data.azuredevops_git_repository.lab_001_default.id
  ])
  to = azuredevops_git_repository.lab_001_default
}
For anyone on a Mac, you will need to do the following:
sudo npm cache clean -f
npm update
npm update -g @vue/cli
sudo vue create app-name
Apparently Vue only likes sudo commands on macOS and Linux.
It does not appear to be a syntax or linting warning, nor does it resemble any typical highlight associated with code cells or markdown.
For me, it does: the sections written in color show there is a code cell with a warning or an error (I also use Pylance).
For example, here I don't have an error or a warning:
With a warning, I have orange text and an orange circle:
With an error, I have red text and a red circle:
Did you make any headway on this? I'm curious as well.
As of December 31, 2024, if you're following older Spring tutorials, you may run into this issue:
In the past, when you selected the "gateway" dependency in Spring Initializr, the artifact included was spring-cloud-starter-gateway-mvc. This worked for some older tutorials. However, this will not work now if the tutorial expects the reactive gateway.
If things start to fail and you're wondering why, this is likely the issue!
The correct artifact is spring-cloud-starter-gateway, which comes when you select "Reactive Gateway" in Spring Initializr.
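For reference, assuming a Maven build with the Spring Cloud BOM managing versions, the reactive gateway dependency looks like this:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
</dependency>
```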
Because of the noise in the experimental data, I thought it would be easier to work with np.interp() to interpolate the data:
import numpy as np
import matplotlib.pyplot as plt

# bias_voltage and dark_current are the experimental arrays from the question
x = np.linspace(0, 32, 100)
interCurve = np.interp(x, bias_voltage, dark_current)
derivB = np.gradient(interCurve[:-1], x[:-1])
plt.plot(x, interCurve, label='interpolated curve')
plt.scatter(bias_voltage, dark_current, marker='x', color='g', s=6, label='experimental points')
plt.plot(x[:-1], derivB, label='derivative of interpolated curve')
plt.legend()
plt.show()
peerdb works with non-hosted ClickHouse instances. In fact, our CI just runs stock ClickHouse:
& then e2e peer setup: https://github.com/PeerDB-io/peerdb/blob/60e80b822ec284224ccb87ee008a33201d42c85d/flow/e2e/clickhouse/clickhouse.go#L67
The peerdb docker-compose files include MinIO to serve as S3 staging; if you're running peerdb outside of that environment, you'll need to configure an S3 bucket for ClickHouse.
It can be awkward to connect to localhost if you're running peerdb inside Docker and Postgres outside Docker. I'd have to know more about your setup to help further.
WP and Woo w/ HPOS are recent versions. Running PHP 7.4
(1) Do you have an answer for why WP/Woo's maybe_serialize doesn't produce the a:... serialized data as described above? Instead, it's two sets of serialized data, not one.
I used maybe_serialize([array here]) and the actual data in the database start with
s:214"
and ends with
";
The actual serialized data are in between. (Note: the "214" depends on the size of the array keys and values.)
If I use PHP's serialize command before sending it to the database, the actual serialized data are stored as you described, without the starting s:214" and ending ";.
Why is that?
In an external program needing the data, if I send the serialized data through PHP's unserialize, it won't unserialize. (Try it at unserialize.com.) It has to be done a second time, which takes up unnecessary resources and requires knowing that it's double-serialized; future programmers may not be aware of that.
(2) In the above example, the serialized data are stored with $order->update_meta_data('item_shipping_data', $data_serialized);
QUESTION: Do I really need to serialize or maybe_serialize the data before calling $order->update_meta_data()?
QUESTION: For reading the data, does WP/Woo automatically unserialize it via $order->get_meta('meta_key_here');?
(3) One step further, using PHP's serialize: in WP/Woo, how would I apply $mysqli->real_escape_string() to sanitize the serialized data for the database, to avoid the double serialization? This question is for other places we may need to store serialized data besides $order->update_meta_data().
Thank you for your thoughtful answers!
The solution is perfect, THANK YOU! Tested with TYPO3 13
Check that your build variant is set to debug and not release. In Android Studio go to the Build menu > Select Build Variant > in the Build Variants window set the 'Active Build Variant' for module ':app' to Debug.
If you have it set to Release it is likely not working because your build.gradle file has the 'debuggable' attribute set to false.
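The relevant build.gradle fragment usually looks roughly like this (a sketch; your buildTypes block may differ):

```groovy
android {
    buildTypes {
        release {
            // Breakpoints won't bind while this is false
            debuggable false
        }
        debug {
            debuggable true
        }
    }
}
```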
src/codejam2011/round1c/B.in Gregory D Dudley
As already mentioned in this topic, a process that runs with PID 1 in its own PID namespace inherits a specific behaviour for SIGINT and SIGTERM: it ignores them.
This is precisely what happens when running a Docker container, but it is not limited to Docker.
For example, run this command in a shell as root:
# unshare --pid --fork --mount-proc sleep infinity
This runs a sleep infinity command in its own PID namespace. You can verify it by running the lsns command in another shell.
# lsns
NS TYPE NPROCS PID USER COMMAND
4026532363 pid 1 292 root sleep infinity
If you try to send a SIGINT to this process (with Ctrl+C in the first shell, or with the kill -s SIGINT <PID> command in the second shell), it will have no effect.
If you want to get rid of this process, you have to hard-kill it with the kill -s SIGKILL <PID> command in the second shell.
You can check that this process was running with PID 1 in its PID namespace by running the ps command the same way.
# unshare --pid --fork --mount-proc ps
PID TTY TIME CMD
1 pts/0 00:00:00 ps
Essentially, you can observe the same thing with Docker.
# docker run -d --rm --name ubuntu ubuntu sleep infinity
d13fc1da3609407332c511f68d5b0513b31fa55df2e9b545044f53bfd0b2dc4b
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d13fc1da3609 docker.io/library/ubuntu:latest sleep infinity 2 seconds ago Up 3 seconds ubuntu
# lsns
4026532384 pid 1 1062 root sleep infinity
Trying to kill the sleep infinity process with SIGHUP or SIGTERM will have no effect, while SIGKILL works, the same behaviour as previously explained, because this process is running with PID 1 in its own PID namespace.
# docker exec ubuntu ps x
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 sleep infinity
2 ? R 0:00 ps x
What does docker stop do?
Without any fancy options, it sends a SIGTERM to the process running with PID 1 in the container's PID namespace. If the process is still running after a 10-second timeout, it sends a SIGKILL.
This is why a container running a process that does not handle signals properly is slow to stop: the first signal is ignored, the second is not.
Documentation here: Docker stop docs
You can verify it with the commands :
# TIMEFORMAT="==> Execution time = %Rs"
# time docker stop ubuntu
ubuntu
==> Execution time = 10.518s
The simplest way consists in using the --init option when creating the container. This adds a binary developed in the tini GitHub project to the newly created container, runs it with PID 1 in the container's PID namespace, and has it run the container's command as a fork.
Running the same commands as before shows this:
# docker run --init -d --rm --name ubuntu ubuntu sleep infinity
27fc4026c264f48c8ee148796f77e7705411691845e4267467b5bc9f2aba609a
# docker exec ubuntu ps x
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 /sbin/docker-init -- sleep infinity
7 ? S 0:00 sleep infinity
8 ? Rs 0:00 ps x
A simple docker stop is very quick, showing that the SIGTERM signal is handled by the docker-init process, which kills its forks and gracefully stops.
# time docker stop ubuntu
ubuntu
==> Execution time = 0.501s
What if you don't want to use the docker --init option?
You want to make sure that your init process declares its own signal handlers. If you're planning to run a simple sleep infinity command in your container, you can wrap it in a bash script that runs the trap command first.
BUT when you run the exec sleep command from bash, the sleep binary code runs in a blocking way, meaning bash waits for it to finish before signals are interpreted again. As a consequence, the trap command becomes ineffective.
A workaround consists in using a non-blocking (signal-responsive) waiting command, like read, reading from a unix pipe created with mkfifo and opened read/write.
Note that you can symlink a file descriptor to this unix pipe file (and even delete the file!) to preserve a non-blocking read without polluting your container with unnecessary pipe files.
This is an example :
#!/bin/bash
trap "exit 0" SIGINT SIGTERM
tmpdir="$(mktemp -d)"
mkfifo "$tmpdir/pipe"
exec 3<>"$tmpdir/pipe"
rm -r "$tmpdir"
read -u3
Put this content in a scripts/run.sh file on your Docker host and do not forget to chmod +x it.
And now, let's run the whole bunch of commands previously mentioned, using this script as the "init" program running with PID 1 in the container.
# docker run -d --rm -v "$PWD/scripts:/scripts" --name ubuntu ubuntu /scripts/run.sh
8d947443ae6eaf0093378ffb4480c3a67ea221ff240bab251d9f92c9216385f6
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d13fc1da3609 docker.io/library/ubuntu:latest sleep infinity 2 seconds ago Up 3 seconds ubuntu
# lsns
4026532384 pid 1 2551 root /bin/bash /scripts/run.sh
# TIMEFORMAT="==> Execution time = %Rs"
# time docker stop ubuntu
ubuntu
==> Execution time = 0.441s
Here's a quick docker stop, without the --init option, mimicking the sleep command with bash, with the necessary signal handling to stop without a hard kill. :-)
Short answer: not a good idea. It is the responsibility of the init process (PID 1 in its PID namespace) on a Linux system to reap zombie processes forked from it. Of course, the minimalistic bash script above does not do this. More information about zombie processes at: this link
You can spawn a 100-second zombie process by adding the (sleep 1 & exec sleep 101) & command before the read command in the previous bash script, and observe it with docker exec ubuntu ps fx.
Your init process in your container must handle signals properly and reap zombie processes. The --init option on the docker command line ensures both.
I was losing my sanity until I thought of changing the object from a list to a tuple:
class AuthorAdmin(admin.ModelAdmin):
    inlines = (BookInline,)
I'm using Python 3.12 and Django 5.