The solution does not match for the dates 14/05/2025 and 14/06/2025; the Nepali date is exactly one day less than what the function returns.
Finally, this did the trick elegantly. environment: ${{ github.ref == 'refs/heads/master' && 'prod' || github.ref == 'refs/heads/release' && 'release' || 'dev' }}
It means master will use the 'prod' environment, release will use 'release', and everything else (like dev, or PRs) will use the 'dev' environment.
When you close and reopen the form, the references to the ChooseFromList and its associated DataTable (oDataTable) become invalid. This happens because the form's UI elements are recreated, and their internal Unique IDs may change upon reopening. Therefore, the oDataTable you are trying to access is not the same one that is currently active in the form.
Since the UniqueID of the DataTable may change between form instances, instead of checking oDataTable.UniqueID you should check the ChooseFromList UID provided by the event.
Replace:
if (oDataTable.UniqueID == "CFL_Item")
With:
if (oCFLEvento.ChooseFromListUID == "CFL_Item")
Before accessing oDataTable, check that it is not null to prevent null reference exceptions.
if (oDataTable != null && !oDataTable.IsEmpty)
Add Xms1g and Xmx1g as in the picture.
It came from the CN of the certificate and the way I used my command. Add
allow_anonymous true
to the config file. Make sure the CNs are different for the server and the client, use the server CN in the -h parameter, and add the client certificate and the client key.
Did you find any solution for that? I don't even have socket.io installed and I get this error.
This might not be possible considering that files being locked is a Windows thing and not a C# thing.
But I think if you use FileShare.ReadWrite, other users are able to use the files while you are using them, which might be what you are looking for. It was answered in this post already:
Download the jar file jjwt-api-0.12.6.jar and add it to the dependencies, and it should be resolved.
Don't run ESLint on transpiled output JavaScript files (https://typescript-eslint.io/troubleshooting/faqs/javascript#should-i-run-eslint-on-transpiled-output-javascript-files). Source TypeScript files have all the content of output JavaScript files, plus type annotations. There's no benefit to also linting output JavaScript files.
We ended up with this same problem: 42 GB of machine key files. So I wrote this PowerShell script: RemoveMachineKeys.ps1. It took a while before it actually started deleting them, but once it did, the script blazed through them pretty fast. I added protection against removing IIS machine keys.
I could not use the above answers that depended on which user created the keys, as these keys were being created by a web site and all had the same created-by user. I also did not want to care about the application pool name if I did not have to.
You cannot convert from HTML language to "app language". Programming languages are not like speaking languages, and cannot simply be translated. HTML is a completely different type of language than languages that would be used to develop apps. I would suggest using an editor such as NetBeans if you wanted to make an app from scratch, using your current HTML project as a base.
I think the issue is related to the multi-language setup. I had set the default language, Arabic, in the Login page process; I changed it to take the language from the selected language like this:
begin
apex_util.set_session_lang(:P9999_LANGUAGE);
--apex_util.set_session_lang(p_lang => 'ar-ae');
end;
After that I did seed and publish, and then the new pages ran without errors.
Thank you all
The screen didn't update properly when I ran VBA code when (A) I had frozen panes in Excel while I also (B) used Application.ScreenUpdating = False in VBA.
Setting Application.ScreenUpdating = True did not work to update the screen in the above. The following two options seemed to always work thus far: (1) Have VBA select a different tab before going back to the desired tab such as
Sheets("Sheet2").Select
Sheets("Sheet1").Select
(2) Turning Excel frozen panes off and then back on also worked
ActiveWindow.FreezePanes = False
ActiveWindow.ScrollRow = 1
ActiveWindow.ScrollColumn = 1
Range("A10").Select
ActiveWindow.FreezePanes = True
Right-click on the ellipsis (the three dots) on the GitLens bar and choose to detach the parts you want to keep permanently in the view. Screenprint of the context menu of the ellipsis on the GitLens bar
A member of the firebase-tools team has successfully reproduced the bug.
To track the resolution of the bug: https://github.com/firebase/firebase-tools/issues/7946
SELECT * FROM FirstTable AS A JOIN SecondTable AS C ON A.fruit LIKE CONCAT('%', C.fruit, '%');
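If you want to poke at this substring join without a database server, here is a minimal sketch using Python's built-in sqlite3. The tables and fruit values are made up for illustration, and note that SQLite has no CONCAT function, so the `'%' || C.fruit || '%'` operator is used instead:

```python
import sqlite3

# In-memory database with hypothetical sample data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE FirstTable(fruit TEXT);
    CREATE TABLE SecondTable(fruit TEXT);
    INSERT INTO FirstTable VALUES ('green apple'), ('ripe banana');
    INSERT INTO SecondTable VALUES ('apple'), ('pear');
""")

# Same idea as the answer: join rows where A.fruit contains C.fruit.
# SQLite uses || for string concatenation instead of CONCAT.
rows = conn.execute("""
    SELECT A.fruit, C.fruit
    FROM FirstTable AS A
    JOIN SecondTable AS C
      ON A.fruit LIKE '%' || C.fruit || '%'
""").fetchall()
print(rows)  # [('green apple', 'apple')]
```

Be aware that a join on LIKE cannot use an index, so it will scan every pair of rows on large tables.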
Basically, for testing, I used the ngrok server to make requests to my Python Flask server and added an HTTPS delivery type subscription to the AWS SNS topic. In this case, for subscription confirmation, SNS triggered the endpoint, and that request holds the SubscribeURL value. You need to use that value to confirm the subscription. For more details, use this link:
https://docs.aws.amazon.com/sns/latest/dg/sns-create-subscribe-endpoint-to-topic.html
In Git, if you want to get the location of a file within the repository:
git ls-tree -r HEAD --name-only
-r recursively lists all files, HEAD is the current branch/commit, and --name-only shows only the file paths.
git show --name-only HEAD
I had a similar issue with my Vite-React app. I believe the solutions listed below directly relate to your issue.
Run Express server with my NodeJS app via Azure App Service deployed to Azure Container
The physical path is C:\Website\EDIConverterDemo\wwwroot\api\account\getcurrentuser. You are only serving static files right now.
It does not look like a CORS issue either.
Overall, I think it is a config issue; the physical path should point to the routing file.
The app is deployed on IIS, but where? On a VM or as an Azure Web App? This is important, as the target link and port config might change depending on the IIS environment.
Can this same configuration (creating a relying party registration bean and configuring saml2 login) be done using XML-based configuration instead of Java-based configuration? If yes, can you please provide a sample for the above?
You don't want #13 here. The standard newline indicator for POSIX-based systems (*NIX, Linux and a lot more) is just #10. Please see the article on newline: https://en.wikipedia.org/wiki/Newline.
If you have Windows habits, you may want to get rid of some of them.
If you still think you have any reason to use #10#13, please clarify, but I don't think you will need it. Even the libraries for Windows tend to migrate to accepting just #10.
The Spring Core Module provides the fundamental features needed to build any Spring application. It includes two lightweight containers, also known as Spring containers or Inversion of Control (IoC) containers:
Spring contexts are also called Spring IoC containers. They are responsible for instantiating, configuring, and assembling beans by reading configuration metadata from XML, Java annotations, and/or Java code in the configuration files. (Ref)
It is related to the PrimeNG v18 upgrade. Add [keepInvalid]="true" and it will fix the issue.
Okay, I think I understand now: I can't serve custom error pages because the whole app pool is broken. So neither .NET nor IIS as a web server can do anything: no redirect, no execute URL, no file content serving, nothing. It's because the app pool itself isn't working. For custom error pages to work, the application pool must be running. In my case the app pool is not running because an exception during its initialization leaves it broken.
The only way to do it, as far as I can see right now, is to use a reverse proxy server where I can set custom error pages. Then, when the backend isn't working, I can serve a custom error page from my reverse proxy server.
Or I can use a more resilient solution and implement a load balancer.
This is an old post and there may be other way for this, but the way I do it is by adding a query shortcut (Tools>Options>Keyboard>Query Shortcuts) such as
Ctrl+3: Select top 1000 * from
then highlight the table name and press Ctrl+3 to return the top 1000 rows from that table.
Reasons for differences:
- Rendering Engine: Chrome's Blink engine may handle filters differently on macOS.
- Color Management: macOS color profiles (e.g., P3 gamut) can affect brightness and hue.
- Hardware Acceleration: GPU rendering can introduce variations.
Solutions:
- Test filters across platforms (use tools like BrowserStack).
- Use @supports (-webkit-appearance: none) for macOS-specific tweaks:
@supports (-webkit-appearance: none) {
  .example {
    filter: brightness(1.1) hue-rotate(15deg);
  }
}
- Debug with Chrome DevTools.
- Use SVG filters for more consistent results.
Here's a solution that worked for me:
1- Close your current project in Android Studio.
2- Open the android directory of your project directly in Android Studio as a standalone project.
By doing this, the android directory is automatically opened in Open for Editing mode.
Note: The commonly suggested solution of creating a [project_name]_android.iml file did not work in my case.
I believe that KEYPATCH and KEYDIFF were always a THOR-only process. You update your key on THOR and then publish to ROXIE. ROXIE is designed to be a read-only, dedicated delivery cluster.
The better way to deploy indexes is to use package maps and superkeys. This blog has all the details of best practices and deployment to ROXIE:
Regards,
Bob
It's a bug. I can't believe no one ever reported this.
Bug: https://github.com/primefaces/primefaces/issues/12887
PR: https://github.com/primefaces/primefaces/pull/12888
It will be fixed in PF 14.0.8+
Check if your AppServiceProvider boot method has the required extendSocialite call (described here). You'll need to add use Illuminate\Support\Facades\Event; in the file so the call works.
After that, check the saml2 configuration in the config/services.php file.
I added the following dependencies to package.json:
"react-data-export": "^0.6.0"
"xlsx": "^0.17.0"
"tempa-xlsx": "0.0.1"
Then, I deleted the node_modules and build folders.
After that, I ran these commands in order:
npm cache clean --force
npm install
npm run build
This resolved the issue I was facing.
It sounds crazy, but I haven't had a normal phone for over 8 years now. I know nothing when it comes to correcting the problem, so I just try not to worry about it. Curiosity usually gets the best of me, which is how I ended up here. What I keep wondering is when Google became incorporated, an LLC. I surely believed something different.
PS: I got to this site asking for info about sec.bcservice.
I'm getting the following ESLint errors for unused variables in my project:
3:10 error 'Tab' is defined but never used no-unused-vars
3:15 error 'Tabs' is defined but never used no-unused-vars
9:8 error 'AssessmentSelector' is defined but never used no-unused-vars
10:8 error 'AssessmentsTable' is defined but never used no-unused-vars
11:8 error 'ErrorBoundaryWrapper' is defined but never used no-unused-vars
12:8 error 'Loader' is defined but never used no-unused-vars
13:8 error 'QuestionnaireDetailedResults' is defined but never used no-unused-vars
14:8 error 'QuestionnaireSummary' is defined but never used no-unused-vars
I have checked my code, and these variables are imported or declared but not used anywhere. How can I fix these errors?
This issue arises because ESLint is enforcing the no-unused-vars
rule, which flags any variables, imports, or components that are defined but not used in your code. This is often helpful to prevent unnecessary or redundant code, but it can be annoying when you are in the process of refactoring or temporarily not using a variable.
Here are several ways to resolve or suppress these errors:
Change ESLint Rule Configuration
If you prefer to keep the unused variables (perhaps for future use or refactoring purposes), you can modify the ESLint configuration to either:
Warn instead of error:
This will prevent the ESLint errors but still notify you about unused variables:
In your .eslintrc.js
or eslint.config.mjs
, change the no-unused-vars
rule to "warn"
:
"no-unused-vars": "warn"
Turn off the rule entirely:
If you don't want ESLint to check for unused variables at all, you can disable the rule entirely:
"no-unused-vars": "off"
In applications developed with the .NET Framework, when you use the connection classes under the System.Net namespace and the .NET Framework version you target is below 4.5, you are likely to encounter an error such as "Request stopped: SSL/TLS secure channel could not be established."
If you add the following code to your form's load procedure, or before the connection is established, your problem will probably be resolved.
ServicePointManager.SecurityProtocol = (SecurityProtocolType)3072;
It was the solution for me.
I got the same issue as you. Just add endpoint_management = "CUSTOMER" to your MWAA resource, and it will solve the problem. The Terraform documentation is not clear regarding this argument.
For .NET 8, there is no need to use Startup.cs. You can embed swagger into Program.cs. Refer to this answer for more details - .NET8 Blazor and Swagger UI
Actually I didn't succeed at running it globally, but I succeeded at running yarn run grunt [my command].
Are you sure it's not just slow? Try it with a new project, maybe click on the terminal and press enter to wake it up.
Issue 1: Thank you @suresh; the first step forward was adding a tsconfig.json file to my build.
Issue 2: I had to convert the mjs to cjs, since the import keyword does not work in mjs. module: CommonJs to the rescue for my server.ts file, which I need to manually rename to cjs after build using renamed.
Issue 3: I had the whole application running in the FE, meaning the server runs on the FE too, so I needed to add the VITE_ prefix to all my config variables (e.g. VITE_CONFIG_VARIABLE).
Issue 4: I had to write a function in a separate file to import config values, as some values needed process.env and some needed import.meta.env for importing the variables from the .env file.
import "dotenv/config";
const getEnvVar = (key: keyof ImportMetaEnv): string => {
if (
typeof import.meta !== "undefined" &&
import.meta.env &&
import.meta.env[key] !== undefined
) {
return import.meta.env[key] as string;
} else if (process.env[key] !== undefined) {
return process.env[key]!;
}
throw new Error(`Missing required environment variable: ${key}`);
};
export const environmentVariables = {
// Square API Configuration
environment: getEnvVar("VITE_SQUARE_ENVIRONMENT"),
I'm facing the same error on an Amazon Linux server.
Go to $JENKINS_HOME/users/users.xml; there you can find the user names.
The password, as far as I know, is automatically generated when you install Jenkins. Run the following command to open the file which contains it:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Is there any update? I got the same error after reinstalling Python 3.12. I'd appreciate it if you can share any update on this. Thank you~
ssh-keygen -t rsa << EOD
y
EOD
Don't forget the two spaces.
h1 {
padding: 2px;
background: cyan;
font-weight: bold; font-style: italic;
}
span {
background: linear-gradient(102deg, #ffffff00 5%, lightblue 5% 95%, #ffffff00 95%);
display: inline-block;
padding: 5px;
color: #FFF;
margin-left:30px;
padding:10px;
}
<h1> <span> This is a title </span> </h1>
After much debugging, uncertainty, and many hours of suffering, it came down to uninstalling and reinstalling the dependency. There was no specific error message or change in git history that I could identify as the source of this. I literally ended up painfully removing code from the app until I discovered the issue with the library.
Which dbt version are you on? dbt 1.7, both Team and Enterprise, gives the option of job chaining.
To avoid duplicate runs, you can schedule job A to run Monday through Sunday excluding Wednesdays, as it will run as part of the AB job.
Cron schedule for that could be 0 0 * * 1-2,4-7
After that, you could set up AB with job chaining like below, as given in the documentation.
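To see why that expression excludes Wednesday, here is a small Python sketch that expands a cron list/range field; the helper name is my own, not part of dbt or cron:

```python
def expand_cron_field(field: str) -> set:
    """Expand a cron list/range field like '1-2,4-7' into a set of ints."""
    values = set()
    for part in field.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            values.update(range(int(lo), int(hi) + 1))
        else:
            values.add(int(part))
    return values

# Day-of-week field from the schedule above (1 = Monday ... 7 = Sunday).
days = expand_cron_field("1-2,4-7")
print(sorted(days))  # [1, 2, 4, 5, 6, 7] -> 3 (Wednesday) is excluded
```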
I found that using a box-shadow like inset -1px 0 #ccc for simulating a right border works fine with fixed columns and makes them fully scrollable.
For me, preserving the table's default border-collapse: collapse; was quite important...
After digging a ton, the issue was caused by the workspace setting "editor.formatOnSaveMode": "modificationsIfAvailable". Removing this allowed me to configure the formatter for HTML as Prettier in the workspace without needing to set any User settings.
Not sure why format on save still worked with the same configuration but a different formatter, but at least it works now.
This is due to the behaviour of the PRNG: different code paths might be used, and there is no guarantee that different sequence lengths will produce exactly the same output samples from the PRNG.
The outputs for lengths 1-15 match, while starting from 16 another (probably vectorized) code path is used. Changes in the sequence length can dispatch to faster code paths.
Source: Odd result using multinomial num_samples...
It seems that this may not be an issue in downgraded torch versions.
So I need some help. After Heroku updated how their REDIS_URL config variable works (talked about in link #1), my app stopped working. I tried to implement the fix (discussed in link #2) but it isn't working, and I am stumped on what else to try.
My REDIS_URL variable is saved on the Heroku server and also in Heroku Redis, where it is updated automatically about once a month via the Heroku Key-Value Store Mini add-on. This does not update the variable that is saved on the server, and I need to update that every time the Redis one changes, but that's a different problem.
Here is how my code looks for this variable in my server.js file
const Redis = require('ioredis');
const redisUrl = process.env.REDIS_URL.includes('?')
? `${process.env.REDIS_URL}&ssl_cert_reqs=CERT_NONE`
: `${process.env.REDIS_URL}?ssl_cert_reqs=CERT_NONE`;
// Create a job queue
const workQueue = new Queue('work', {
redis: {
url: redisUrl
}
});
And here is how my code look for the variable in my worker.js code
const redisUrl = process.env.REDIS_URL.includes('?')
? `${process.env.REDIS_URL}&ssl_cert_reqs=CERT_NONE`
: `${process.env.REDIS_URL}?ssl_cert_reqs=CERT_NONE`;
const workQueue = new Queue('work', {
redis: {
url: redisUrl
}
});
This is the error that shows up in my server logs
2024-11-15T13:06:10.107332+00:00 app[worker.1]: Queue Error: Error: connect ECONNREFUSED 127.0.0.1:6379
2024-11-15T13:06:10.107333+00:00 app[worker.1]: at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1610:16) {
2024-11-15T13:06:10.107333+00:00 app[worker.1]: errno: -111,
2024-11-15T13:06:10.107333+00:00 app[worker.1]: code: 'ECONNREFUSED',
2024-11-15T13:06:10.107333+00:00 app[worker.1]: syscall: 'connect',
2024-11-15T13:06:10.107333+00:00 app[worker.1]: address: '127.0.0.1',
2024-11-15T13:06:10.107333+00:00 app[worker.1]: port: 6379
2024-11-15T13:06:10.107334+00:00 app[worker.1]: }
If you have already upgraded to 365:
=LET(res,CONCAT(TOCOL(IF(A1:A11<>1,ADDRESS(ROW(A1:A11),COLUMN(A1:A11),4,1),1/0),2)&","),
LEFT(res,LEN(res)-1))
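To make the formula's intent concrete, here is a Python sketch of the same logic; the values list is a made-up stand-in for A1:A11:

```python
# Hypothetical contents of A1:A11.
values = [1, 5, 1, 7, 1, 1, 2, 1, 1, 9, 1]

# Like the LET/TOCOL formula: collect the address of every cell whose
# value is not 1 and join with commas. join() adds no trailing comma,
# so there is nothing to trim off, unlike the CONCAT(...&",") trick.
addresses = [f"A{row}" for row, v in enumerate(values, start=1) if v != 1]
print(",".join(addresses))  # A2,A4,A7,A10
```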
I need this file:
https://github.com/giswqs/leafmap/raw/master/examples/data/wind_global.nc
I have worked with this but lost the file. Please provide it to me if available.
Don't inject the constructor. Instead create a normal constructor and only inject the repository.
@AndroidEntryPoint
class CustomClass(val name: String) {
@Inject
lateinit var repository: Repository
}
Another thing that wasn't mentioned is that some people may use browser plugins to translate selected text (e.g. Google Translate plugin for Chrome). In this case, the user should be able to select the needed text.
I managed to fix the issue by following these steps:
1. Stop the dev server (mine was running via npx expo start, so I just used Ctrl-C)
2. npm cache clean --force
3. Delete the node_modules folder (rm -fr node_modules in the project root), and also delete package-lock.json
4. npm install
After following these steps, I was able to run my app again!
Footnote to @Luis:
If you are using Python, you can add the Fields argument to the presigned post like this:
response = s3_client.generate_presigned_post(
    "bucket",
    object_name,
    Fields={"Content-Type": "application/octet-stream"},
    # fields included in the form must also be allowed by matching Conditions
    Conditions=[{"Content-Type": "application/octet-stream"}],
    ExpiresIn=3600,
)
I was able to solve the issue, thanks to Luke's comment. The problem is not just limited to TouchableOpacity, but also affects other components such as Button and Pressable when using the onPress event in headerRight or headerLeft of Stack.Screen. By changing onPress to onPressIn, the issue was resolved for all these components. Here's the updated code:
<Stack.Screen
name="notes/index"
options={{
title: "Dodo",
headerRight: () => (
<TouchableOpacity onPressIn={() => console.log("Button pressed!")}>
<Text>Click Me</Text>
</TouchableOpacity>
),
}}
/>
Thanks again to Luke for pointing me in the right direction!
I had the same problem; just restart the database with SQL Server Configuration Manager. And it's true that "MS SQL Server never stores your password, for security reasons. MS SQL Server stores only the HASH of your password. Therefore the settings form can't show the password; instead it shows some mysterious 15 characters." The password is the same one that you set up; you just need to restart the database server.
https://docs.vespa.ai/en/operations-selfhosted/multinode-systems.html and https://docs.vespa.ai/en/operations-selfhosted/config-sentinel.html#cluster-startup are useful to understand the start sequence - in short, make sure the config server(s) is/are started first, making sure they run, then other pods can start, configured with config server locations.
https://github.com/vespa-engine/sample-apps/tree/master/examples/operations/multinode-HA/gke is also useful
Finally, your log message above indicates that you have not deployed the application to Vespa: "No application exists". See https://docs.vespa.ai/en/application-packages.html. So there are no services to start.
My understanding is that your qas branch is out of sync with the origin. I would try to debug and resolve the state by investigating the logs:
git fetch
git log origin/qas --oneline
git log qas --oneline
Maybe it shows you why Git is confused.
If you know that you don't need the other branch's current state, and you don't care that others may hate you, you can always force =)
I've tried solving this with the following Terraform code snippet:
provider "databricks" {
alias = "account"
account_id = "00000000-0000-0000-0000-000000000000"
host = "https://accounts.azuredatabricks.net"
}
provider "databricks" {
account_id = "00000000-0000-0000-0000-000000000000"
host = module.databricks.workspace_url
}
locals {
workspace_user_groups = toset([
"my_account_group",
])
}
data "databricks_group" "workspace_user_groups" {
provider = databricks.account
for_each = local.workspace_user_groups
display_name = each.value
}
resource "databricks_permission_assignment" "workspace_user_groups" {
for_each = local.workspace_user_groups
principal_id = data.databricks_group.workspace_user_groups[each.key].id
permissions = ["USER"]
}
resource "databricks_group" "workspace_user_groups" {
depends_on = [databricks_permission_assignment.workspace_user_groups]
for_each = local.workspace_user_groups
display_name = each.value
}
but this fails with a claim issue like the following when reading the account groups:
Error: cannot read group: io.jsonwebtoken.IncorrectClaimException: Expected iss claim to be: https://sts.windows.net/9652d7c2-1ccf-4940-8151-4a92bd474ed0/, but was: https://sts.windows.net/4ed310c5-f7a0-49ec-982b-34aeeeaea662/
Does anyone know what the issue is here?
Ooh. Thanks tkausl. It seems I just forgot to call the lambda. Here is the fixed snippet:
template<typename... Args>
std::vector<std::shared_ptr<int>> createSourceVector(Args... args)
{
std::vector<std::shared_ptr<int>> result;
(
[&]() {
result.push_back(std::make_shared<int>(args));
}(),
...);
return result;
}
I stumbled upon this old thread whilst looking for a solution. In the end I just changed the protected $page class variable to public $page, and then you can just change the current page with $pdf->page = 1;
I'm having the same issue. As far as I know, that featured thingy is a premium feature in Artifactory. Are you using the 'oss' version or the 'pro' one?
Because you do not check for:
fgets(string, 256, fp) != NULL
You no longer write anything into your variable, but since you did not notice you reached EOF, you continue to print the last known value of string.
You have to specify the column type as a DataGridViewComboBoxColumn
var relatedColumn = (DataGridViewComboBoxColumn)paymentTable.Columns[0];
// 0 is your name column index
relatedColumn.Items.Add("New Item");
I suggest you follow the migration guide from the ESLint documentation. You can start using the configuration migrator on your existing configuration file.
Possibly, the system is still looking for python3.8. Have you exported python3.13 to the system path in your bashrc?
Also, you can have multiple Python versions at once. A common practice is to work with a Python virtual environment, which you can set up with any of the versions available on your system.
See https://docs.python.org/3/library/venv.html
I found the answer!!! I saw it on JetBrains' official site. (Here)
Basically, in Settings > Plugins, disable IdeaVim. At first, it disabled my ability to select altogether, but then I restarted Rider and it worked OK.
What you are looking for is the property transform: skew().
Just give the container its value and its child elements the opposite one.
.skew {
background: #ff00ff;
text-align: center;
padding: 1rem;
width: 60%;
transform: skew(-20deg);
margin: 0 auto;
}
h1 {
transform: skew(20deg);
}
<div class="skew">
<h1>Hello World!</h1>
</div>
You also have to add the recipe to the image you are building. In one of the layers there should be a st-image-qt.bb file; it should be in <layername>/recipes-core/images. There, you have to add it with IMAGE_INSTALL:append = " mygui " (note the space after the quote). Only then is the recipe included in the image; adding the layer alone is not enough.
Replace YOURFIELD with your field. And you can omit the ISNULL part if you like.
SELECT ISNULL(CAST(CAST(YOURFIELD AS VARBINARY(MAX)) AS
NVARCHAR(MAX)),'NA') AS YOURFIELD FROM YOURTABLE
After some testing I found a solution, and it works. If anyone has an alternative and better method, please let me know. Thank you :)
#include <gtk/gtk.h>
gboolean single = TRUE;
gboolean longPress = FALSE;
void click_event (GtkGesture *gesture,
int n_press,
gdouble x,
gdouble y,
gpointer user_data)
{
if (n_press > 1) single = FALSE;
longPress = FALSE;
}
void stopp_event (GtkGesture *gesture, gpointer user_data)
{
if (single == FALSE){
g_print("Double click\n");
}else {
if (longPress == FALSE)
g_print("Single click\n");
}
single = TRUE;
longPress = TRUE;
}
void long_press (GtkGestureLongPress* self,
gdouble x,
gdouble y,
gpointer user_data)
{
g_print("long pressed\n");
}
static void
activate (GtkApplication* app,
gpointer user_data)
{
GtkWidget *window;
window = gtk_application_window_new (app);
gtk_window_set_title (GTK_WINDOW (window), "Window");
gtk_window_set_default_size (GTK_WINDOW (window), 200, 200);
gtk_window_present (GTK_WINDOW (window));
GtkGesture *gesture = gtk_gesture_click_new();
gtk_gesture_single_set_button(GTK_GESTURE_SINGLE(gesture), GDK_BUTTON_PRIMARY);
gtk_widget_add_controller(window,(GTK_EVENT_CONTROLLER(gesture)));
g_signal_connect (gesture, "released", G_CALLBACK (click_event), NULL);
g_signal_connect (gesture, "stopped",G_CALLBACK(stopp_event), NULL);
GtkGesture* gesture_long_press = gtk_gesture_long_press_new();
gtk_gesture_single_set_button(GTK_GESTURE_SINGLE(gesture_long_press), GDK_BUTTON_PRIMARY);
gtk_gesture_single_set_exclusive (GTK_GESTURE_SINGLE (gesture_long_press), TRUE);
gtk_event_controller_set_propagation_phase((GtkEventController *)gesture_long_press, GTK_PHASE_CAPTURE);
gtk_gesture_long_press_set_delay_factor ((GtkGestureLongPress *)gesture_long_press, 1);
gtk_widget_add_controller (window, GTK_EVENT_CONTROLLER (gesture_long_press));
g_signal_connect (gesture_long_press, "pressed", G_CALLBACK (long_press), NULL);
gtk_window_present ((GtkWindow *)window);
}
int
main (int argc,
char **argv)
{
GtkApplication *app;
int status;
#if GLIB_CHECK_VERSION(2, 74, 0)
app = gtk_application_new ("org.gtk.example", G_APPLICATION_DEFAULT_FLAGS);
#else
app = gtk_application_new ("org.gtk.example", G_APPLICATION_FLAGS_NONE);
#endif
g_signal_connect (app, "activate", G_CALLBACK (activate), NULL);
status = g_application_run (G_APPLICATION (app), argc, argv);
g_object_unref (app);
return status;
}
You can add
geom_vline(xintercept = c(-0.75, 1.00, 2.75, 4.25, 6.00, 7.75), linewidth = 3, color = 'gray92') +
below your first geom_vline.
Every EventCard uses the same ViewModel, so the same PictureViewModel.bitmap is used in every card. You should save the Bitmap in the Event model, so each Event will have its own image.
I think I have a solution for your query. I've tried it this way and it will work for you.
Here are some steps for implementing Dependency Injection.
1. Add Hilt Dependencies. Add the necessary dependencies in your build.gradle file.
Add this into your app build.gradle.kts file
dependencies {
ksp("com.google.dagger:hilt-compiler:2.48") // for dagger
implementation("com.google.dagger:hilt-android:2.48") // for hilt
implementation("androidx.lifecycle:lifecycle-viewmodel-ktx:2.6.1") // viewmodel
implementation("com.squareup.retrofit2:converter-gson:2.9.0") // retrofit
implementation("com.google.code.gson:gson:2.10.1") // gson
}
And add this into your project build.gradle.kts file
plugins {
id("com.android.library") version "8.0.2" apply false
id("com.google.dagger.hilt.android") version "2.48" apply false
id("com.google.devtools.ksp") version "1.9.0-1.0.13" apply false
}
Add this plugins id to your app build.gradle.kts file
plugins {
id("com.google.dagger.hilt.android")
id("com.google.devtools.ksp")
}
Okay, perfect. Now we have completed the dependency step.
We will head to the implementation step.
2. Initialize Hilt in the Application Class Annotate your Application class with @HiltAndroidApp
@HiltAndroidApp
class WeatherApplication : Application()
3. Create a Network Module. Define a Hilt module to provide dependencies like Retrofit and OkHttpClient.
import dagger.Module
import dagger.Provides
import dagger.hilt.InstallIn
import dagger.hilt.components.SingletonComponent
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory
import javax.inject.Singleton
@Module
@InstallIn(SingletonComponent::class)
object NetworkModule {
@Provides
@Singleton
fun provideRetrofit(): Retrofit {
return Retrofit.Builder()
.baseUrl("https://api.weatherapi.com/v1/")
.addConverterFactory(GsonConverterFactory.create())
.build()
}
@Provides
@Singleton
fun provideWeatherApi(retrofit: Retrofit): WeatherApi {
return retrofit.create(WeatherApi::class.java)
}
}
4. Create an API Interface Define an interface for the API endpoints.
import retrofit2.http.GET
import retrofit2.http.Query
interface WeatherApi {
@GET("forecast.json")
suspend fun getCurrentWeather(
@Query("key") apiKey: String,
@Query("q") location: String,
@Query("days") days: Int,
@Query("aqi") aqi: String,
@Query("alerts") alerts: String
): WeatherResponse
}
5. Create a Repository Use the WeatherApi in a repository class. Mark the class with @Inject to enable dependency injection.
import javax.inject.Inject
class WeatherRepository @Inject constructor(private val api: WeatherApi) {
suspend fun fetchWeather(location: String): WeatherResponse {
return api.getCurrentWeather("your-api-key", location, 7, "yes", "yes")
}
}
6. Create a ViewModel Use the repository in your ViewModel. Annotate the ViewModel with @HiltViewModel.
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import dagger.hilt.android.lifecycle.HiltViewModel
import kotlinx.coroutines.launch
import javax.inject.Inject
@HiltViewModel
class WeatherViewModel @Inject constructor(
private val repository: WeatherRepository
) : ViewModel() {
private val _weatherData = MutableLiveData<WeatherResponse>()
val weatherData: LiveData<WeatherResponse> get() = _weatherData
fun loadWeather(location: String) {
viewModelScope.launch {
try {
val weather = repository.fetchWeather(location)
_weatherData.value = weather
} catch (e: Exception) {
// Handle error
}
}
}
}
7. Inject Dependencies in an Activity or Fragment. Use the @AndroidEntryPoint annotation to enable dependency injection in your activity or fragment.
import android.content.Intent
import android.os.Bundle
import android.util.Log
import android.widget.Toast
import androidx.activity.enableEdgeToEdge
import androidx.activity.viewModels
import androidx.appcompat.app.AppCompatActivity
import androidx.core.view.ViewCompat
import androidx.core.view.WindowInsetsCompat
import com.example.test.databinding.ActivityMainBinding
import com.example.test.di.WeatherViewModel
import dagger.hilt.android.AndroidEntryPoint
@AndroidEntryPoint
class WeatherActivity : AppCompatActivity() {
private val viewModel: WeatherViewModel by viewModels()
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_weather)
viewModel.weatherData.observe(this) { weather ->
// Update UI with weather data
}
// Fetch weather for a location
viewModel.loadWeather("New York")
}
}
Keep in mind that WeatherResponse is your data class; it could be quite long, so I haven't included it here.
How can we update the picklist field with a parameter value in ADO through an ADO YAML pipeline?
Use matplotlib.rcdefaults(). And, on the same page:
matplotlib.style.use('default') or rcdefaults() to restore the default rcParams after changes.
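As a quick sanity check, here is a minimal sketch of the reset behaviour (assuming no custom matplotlibrc overrides the library defaults):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is needed

# Tweak an rcParams entry, then restore the library defaults
matplotlib.rcParams["lines.linewidth"] = 5.0
assert matplotlib.rcParams["lines.linewidth"] == 5.0

matplotlib.rcdefaults()  # restore Matplotlib's internal defaults
assert (matplotlib.rcParams["lines.linewidth"]
        == matplotlib.rcParamsDefault["lines.linewidth"])
```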
The updates I made based on a friend's answer worked. I wish he hadn't deleted the answer; I would have considered it valid. Unfortunately, someone downvoted it and he deleted it.
I changed this: IIS -> Application Pools -> Advanced Settings
Identity = NetworkService
and added this line to my code:
System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls | SecurityProtocolType.Ssl3;
my problem is solved.
You could:
Buffer the points - it looks like your points are fairly uniformly spaced, so buffer by their spacing (radius + 1 metre) so that each buffer overlaps its neighbours.
Dissolve the resulting buffers, specifying your "abk" field. This will create a separate multipart polygon for each "abk" code.
Set up labelling to label each multipart polygon, set the symbology to "no symbology", and you will have something like the below.
You might need to play around with it a bit, but hopefully this should get you there.
Final points grid
pip install pytest-shutil
Please, do you know what the conditions would be for a three-digit tape? The sum of 3 numbers? For example, 101_10_110.
I am looking for the same solution. Have you got any leads?
A solution can be found in this comment:
pip3 install ls5 (the PyPI package), view the code and you'll find what you need.
Edit the Docker daemon configuration:
sudo nano /etc/docker/daemon.json
Add the following configuration to disable IPv6:
{ "ipv6": false }
After modifying the daemon.json file (or disabling IPv6 on the host), restart the Docker service to apply the changes:
sudo systemctl restart docker
I think I've found a solution, which is though not as convenient as direct indexing:
i, j = mask.nonzero(as_tuple = True)
x[i, j, :, :3] = y[i, j]
However, when the shapes of the masked dimensions are unknown, an additional reshape is needed to squeeze the unknown dimensions into one, which still costs extra time and memory. So I still think it makes no sense that a mask and a slice assignment cannot be used simultaneously.
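For anyone without PyTorch at hand, the same index-then-slice pattern can be sketched in NumPy (the shapes here are made up for illustration):

```python
import numpy as np

# x: (2, 3, 4, 5); y: (2, 3, 4, 3); mask selects positions in the first two dims
x = np.zeros((2, 3, 4, 5))
y = np.ones((2, 3, 4, 3))
mask = np.array([[True, False, True],
                 [False, True, False]])

i, j = mask.nonzero()       # advanced indices for the masked positions
x[i, j, :, :3] = y[i, j]    # combine advanced indexing with a plain slice

assert x[0, 0, :, :3].sum() == 12   # 4*3 ones written at a masked position
assert x[0, 1].sum() == 0           # unmasked position untouched
assert x[0, 0, :, 3:].sum() == 0    # sliced-off columns untouched
```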
I found an answer here.
A possible way to do this is:
expect(repository).to receive(:clone) do |**kwargs|
expect(kwargs).not_to include(:branch)
end
Admittedly not as neat as a single matcher, but it gets the job done.
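For comparison, the analogous "called without this keyword argument" check can be sketched with Python's unittest.mock (the repository/clone names mirror the question; everything else is hypothetical):

```python
from unittest.mock import MagicMock

# Stand-in for the repository double from the question
repository = MagicMock()

# Code under test calls clone() without a branch keyword
repository.clone("https://example.com/repo.git", depth=1)

# Inspect the recorded call and assert :branch was not passed
args, kwargs = repository.clone.call_args
assert "branch" not in kwargs
assert kwargs["depth"] == 1
```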
Using MANAGE_EXTERNAL_STORAGE is not a viable solution, especially if you’re planning to publish your app on the Play Store, as it will likely get rejected. The proper way to save files on Android (API 34) is by using the MediaStore API. It doesn’t require any special permissions to write to public folders.
I’ve developed a Flutter plugin that solves this issue for you. Feel free to give it a try, and if you encounter any problems, you can open an issue on the repository. I’d be happy to help!
How does it work? It works exactly as you described, and the behavior you described is perfectly expected. Let's see.
Nothing prevents the user from typing whatever this person wants, even not a number. If you need to write code guarded against incorrect input, you need to read not even a number, but the string, try to parse it into a number in the required domain of values and handle all the unsuitable input accordingly.
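The guarded-input idea is language-agnostic; here is a minimal sketch in Python, assuming the question's subrange 1..40 (the function name and return convention are hypothetical):

```python
def read_int_in_range(text, lo=1, hi=40):
    """Parse text as an integer and validate it against [lo, hi].

    Returns the value on success, or None for unsuitable input
    (non-numeric text or a number outside the domain).
    """
    try:
        value = int(text.strip())
    except ValueError:
        return None           # not a number at all
    if lo <= value <= hi:
        return value
    return None               # a number, but outside the valid domain

assert read_int_in_range("7") == 7
assert read_int_in_range("41") is None
assert read_int_in_range("abc") is None
```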
The purpose of subrange types is completely different. This is a static (that is, based on compile-time data) feature: the valid range is known statically. First, it allows the compiler to choose the underlying integer type automatically, based on the range known from the code at compile time. It also performs the necessary compile-time checks and validates that the range and operations such as assignment or comparison are compatible. These checks work with constants, including immediate constants (immediate constants are created when you write the literals 41 and 1 in your IF and REPEAT … UNTIL statements). You have described one of these situations.
In other words,
VAR
j: 1..40;
//..
j := 41; // compile-time error, failure to build the code
//..
IF (j <= 40) AND (j >= 1) //... compile-time warning: it is statically
// analyzed that you cannot do, for example, assignment j := 41,
// therefore, the comparison operator will always return true,
// so, the IF condition is always met
In this sense, the subrange types are extremely useful.
The data entered by the user is the run-time data. It has nothing to do with the subranges. In the case of your subrange type, the input is interpreted according to the underlying type created during the build of the code.
If the parameters are all of the same type, it may be easier to create a vector (to iterate by index) or a map (to iterate by name) and iterate over that.
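For example, in Python the same idea looks like this (the parameter names and values are hypothetical):

```python
# Same-typed parameters gathered into a map, to iterate by name
params = {"alpha": 0.1, "beta": 0.5, "gamma": 0.9}

for name, value in params.items():
    print(f"{name} = {value}")

# Or gathered into a vector/list, to iterate by index
values = list(params.values())
assert values[0] == 0.1
```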
Using macOS Sequoia & postgres version 15
Add the following line to your bash profile:
export PATH="/usr/local/opt/postgresql@15/bin:$PATH"
This error also happens when writing into a folder without specifying the name of the file to be created.
The solution is to include the file name in the path: ...\filename.txt
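A minimal Python illustration of the failure mode (the exact exception type varies by OS; the folder is a temporary one created for the demo):

```python
import os
import tempfile

folder = tempfile.mkdtemp()

# Writing to the folder itself fails: the path names a directory, not a file
try:
    open(folder, "w")
    raise AssertionError("expected an error when opening a directory")
except (IsADirectoryError, PermissionError):
    pass

# Appending a file name to the path makes the write succeed
path = os.path.join(folder, "filename.txt")
with open(path, "w") as f:
    f.write("ok")
assert os.path.exists(path)
```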
Use the --debug-sql option of the Django test runner.
Enables SQL logging for failing tests. If --verbosity is 2, then queries in passing tests are also output.
./manage.py test --debug-sql
Great. The SECOND I looked at the post, I found my typo in MoneyForm: passing {{control}} instead of {control} to my TextInputForm. Duh!
Not that I hadn't stared at the code for an hour before writing all this up :-)
Hilarious.
What I was missing was setting ZBX_SERVER_HOST to the name of the service defined in the docker-compose file.
Maybe I am missing a point, but I believe ZBX_SERVER_HOST should be present in the docker-compose example, because the default localhost that ZBX_SERVER_HOST falls back to will not result in a correct connection in most containerized configurations of Zabbix web server + Zabbix server.
In my case the relevant part of the docker compose is:
services:
zabbix-server:
...
zabbix-web:
environment:
...
- ZBX_SERVER_HOST=zabbix-server
Good news! I fixed the problem, thanks for the answers. Here is the function now:
size_t
TileSet_encode (struct TileSet_s *tileset,
bytes_t **bytes, size_t bytes_offset, size_t bytes_size)
{
// next we collect the vertices; duplicates aren't stored, so we need to collect
// them and then calculate.
struct UnitSet_s set;
if (!UnitSet_init(&set))
{
return 0;
}
for (TileSet_Size_t ti = 0; ti < tileset->count; ++ti)
{ // translation of this stuff:
// iterate each tile and iterate each vertex of each tile.
for (unsigned short vi = 0; vi < TILE_VERTICES_MAX; ++vi)
{ // dump the vertex into the set, if we fail terminate operation.
if (UnitSet_add(&set, tileset->tilearray[ti]->tiledata.vertices[vi]) == UNIT_SET_NOMEM)
{
UnitSet_destroy(&set);
return 0;
}
}
}
// size is ok. :P
size_t size_tiles = tileset->count * TILE_ENCODED_SIZE;
size_t size_vertices = sizeof(double) * set.list.length;
size_t bytes_to_write = 8 + size_tiles + size_vertices;
if ((bytes_size - bytes_offset) < bytes_to_write)
{ // now we know how much space the whole thing takes.
// ensure to increase space if needed.
bytes_size = bytes_offset + ((bytes_size - bytes_offset) + bytes_to_write);
bytes_t *newbytes = realloc(*bytes, sizeof(**bytes) * bytes_size);
if (!newbytes)
{ // failed to increase bytes buffer.
return 0;
}
*bytes = newbytes;
}
// ******************************************************************
// improvements from here.
uint16_t chunk_x = 20;
uint16_t chunk_y = 40;
memset((*bytes) + bytes_offset, 0xAA, bytes_size - bytes_offset); // TEST, remove garbage.
memcpy((*bytes) + bytes_offset, &chunk_x, sizeof(uint16_t));
memcpy((*bytes) + bytes_offset + 0x02, &chunk_y, sizeof(uint16_t));
memcpy((*bytes) + bytes_offset + 0x04, &tileset->count, sizeof(uint16_t));
for (TileSet_Size_t tnum = 0; tnum < tileset->count; ++tnum)
{
size_t bytes_tile_offset = bytes_offset + 0x08 + (tnum * TILE_ENCODED_SIZE);
struct TileSet_Tile_s *tiledata = tileset->tilearray[tnum];
uint8_t tile_id = (uint8_t) tiledata->id;
memcpy((*bytes) + bytes_tile_offset, &tile_id, sizeof(tile_id));
for (size_t vi = 0; vi < TILE_VERTICES_MAX; ++vi)
{
uint16_t tile_vidx = (uint16_t) vi;
memcpy((*bytes) + bytes_tile_offset + 1 + (vi * sizeof(uint16_t)),
&tile_vidx, sizeof(tile_vidx));
}
}
for (size_t vi = 0; vi < set.list.length; ++vi)
{
double unit = (double) set.list.array[vi];
memcpy((*bytes) + bytes_offset + 0x08 + size_tiles + (sizeof(unit) * vi),
&unit, sizeof(unit));
}
// ******************************************************************
UnitSet_destroy(&set);
return bytes_to_write;
}
In Short
The resulting buffer is what I want. The function needs some extra tweaks, but overall it works.
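To sanity-check the layout, the 8-byte header the function writes (chunk_x at offset 0x00, chunk_y at 0x02, the tile count at 0x04, and two unused bytes before the tiles begin at 0x08) can be sketched with Python's struct module, assuming little-endian uint16 fields:

```python
import struct

# <HHH2x: three little-endian uint16 values plus 2 bytes of padding = 8 bytes
HEADER = struct.Struct("<HHH2x")
chunk_x, chunk_y, tile_count = 20, 40, 3

header = HEADER.pack(chunk_x, chunk_y, tile_count)
assert len(header) == 8                      # matches the 8 in bytes_to_write
assert HEADER.unpack(header) == (20, 40, 3)  # fields round-trip cleanly
```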
For filtering, sure! Use Filters:
If sheet protection doesn't work, I'm afraid there is no way to protect the PT.
I resolved it myself, as there was no response.
Firebase Hosting doesn't support Quartz directly the way it supports Gatsby.
So I had to host the site via Cloud Run using Caddy and then run firebase deploy --only hosting.
Quite a learning experience. Thanks!